Abstract
Traditionally, forecasters focus on the development of algorithms that identify optimal models and sets of parameters, optimal in the sense of within-sample fitting. However, this quest rests on the strong assumption that optimally set parameters will also give the best extrapolations. The problem becomes even more pertinent given the vast volumes of data to be forecast in the big data era. In this paper, we ask whether this pursuit of optimality always bears fruit, or whether we spend too much time and effort chasing it. Could we be better off with faster, more robust systems that aim for suboptimal forecasting solutions which, in turn, would not jeopardise the efficiency of the systems in use? This study sheds light on this question by means of an empirical investigation. We show the trade-off between optimal and suboptimal solutions in terms of forecasting performance versus computational cost. Finally, we discuss the implications of suboptimality and attempt to quantify the monetary savings that result from suboptimal solutions.
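The abstract does not name the forecasting methods used, so the following is only a minimal sketch of the trade-off it describes, assuming simple exponential smoothing via statsmodels: each series is forecast once with the smoothing parameter optimised on the in-sample fit ("optimal") and once with a fixed, plausible value ("suboptimal"). The batch of random-walk series and the fixed alpha of 0.3 are hypothetical choices for illustration, not the paper's data or settings.

```python
import time
import numpy as np
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

# Hypothetical "big data" setting: many demand-like series to forecast.
rng = np.random.default_rng(1)
series_batch = 100 + rng.normal(0, 5, size=(200, 60)).cumsum(axis=1)

def forecast_batch(optimise: bool):
    """One-step-ahead forecast for every series; returns (forecasts, seconds)."""
    start = time.perf_counter()
    forecasts = []
    for y in series_batch:
        if optimise:
            # "Optimal": estimate alpha by optimising the in-sample fit.
            fit = SimpleExpSmoothing(y).fit(optimized=True)
        else:
            # "Suboptimal": skip optimisation and use a fixed, reasonable alpha.
            fit = SimpleExpSmoothing(y).fit(smoothing_level=0.3, optimized=False)
        forecasts.append(fit.forecast(1)[0])
    return np.array(forecasts), time.perf_counter() - start

f_opt, t_opt = forecast_batch(optimise=True)
f_fix, t_fix = forecast_batch(optimise=False)

print(f"optimised: {t_opt:.2f}s, fixed-parameter: {t_fix:.2f}s "
      f"(~{t_opt / t_fix:.0f}x slower)")
print(f"mean absolute forecast difference: {np.abs(f_opt - f_fix).mean():.3f}")
```

The point of the sketch is that the per-series optimisation cost compounds across a large collection of series while the forecasts themselves may differ only modestly, which is the performance-versus-computational-cost trade-off the paper investigates empirically.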
Original language | English |
---|---|
Pages (from-to) | 322-329 |
Number of pages | 8 |
Journal | Computers & Operations Research |
Volume | 98 |
Early online date | 11 May 2017 |
DOIs | |
Publication status | Published - 1 Oct 2018 |
Keywords
- Big data
- Forecasting
- Optimisation
- Retail
ASJC Scopus subject areas
- General Computer Science
- Modelling and Simulation
- Management Science and Operations Research
Profiles
- Fotios Petropoulos
- Management - Professor
- Information, Decisions & Operations - Chair in Management Science
- Smart Warehousing and Logistics Systems - Member
Person: Research & Teaching, Researcher