Forecasting for big data: does suboptimality matter?

Konstantinos Nikolopoulos, Fotios Petropoulos

Research output: Contribution to journal › Article

10 Citations (Scopus)
119 Downloads (Pure)

Abstract

Traditionally, forecasters focus on the development of algorithms to identify optimal models and sets of parameters, optimal in the sense of within-sample fitting. However, this quest strongly assumes that optimally set parameters will also give the best extrapolations. The problem becomes even more pertinent when we consider the vast volumes of data to be forecast in the big data era. In this paper, we question whether this obsession with optimality always bears the respective fruit, or whether we spend too much time and effort in its pursuit. Could we be better off targeting faster and more robust systems that aim for suboptimal forecasting solutions which, in turn, would not jeopardise the efficiency of the systems in use? This study sheds light on this question by means of an empirical investigation. We show the trade-off between optimal and suboptimal solutions in terms of forecasting performance versus computational cost. Finally, we discuss the implications of suboptimality and attempt to quantify the monetary savings resulting from suboptimal solutions.
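The trade-off the abstract describes can be sketched with a toy example. The snippet below is a minimal illustration, not the paper's actual experimental setup: it fits simple exponential smoothing to a series either "optimally" (grid-searching the smoothing parameter to minimise within-sample squared error) or "suboptimally" (a fixed default of 0.3, an assumed value chosen here for illustration), and compares the computational cost of the two routes.

```python
import random
import time

def ses_forecast(series, alpha):
    """One-step-ahead simple exponential smoothing forecast."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

def sse(series, alpha):
    """Within-sample sum of squared one-step-ahead errors for a given alpha."""
    level = series[0]
    total = 0.0
    for y in series[1:]:
        total += (y - level) ** 2
        level = alpha * y + (1 - alpha) * level
    return total

random.seed(1)
train = [100 + random.gauss(0, 5) for _ in range(200)]

# "Optimal": grid-search alpha to minimise within-sample SSE
# (99 full passes over the training data).
t0 = time.perf_counter()
best_alpha = min((a / 100 for a in range(1, 100)), key=lambda a: sse(train, a))
opt_time = time.perf_counter() - t0

# "Suboptimal": skip the search entirely and use a fixed default.
t0 = time.perf_counter()
fixed_alpha = 0.3
sub_time = time.perf_counter() - t0

print("optimised alpha:", best_alpha, "forecast:", ses_forecast(train, best_alpha))
print("fixed alpha:    ", fixed_alpha, "forecast:", ses_forecast(train, fixed_alpha))
print("search cost vs. no search (s):", opt_time, "vs.", sub_time)
```

Multiplied across the millions of series typical of big-data retail forecasting, the per-series cost of the parameter search is what the paper weighs against the (often modest) out-of-sample accuracy gain.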
Original language: English
Pages (from-to): 322-329
Number of pages: 8
Journal: Computers and Operations Research
Volume: 98
Early online date: 11 May 2017
DOIs
Publication status: Published - 1 Oct 2018

Keywords

  • Big data
  • Forecasting
  • Optimisation
  • Retail

ASJC Scopus subject areas

  • Computer Science (all)
  • Modelling and Simulation
  • Management Science and Operations Research
