Abstract
Assessing rainfall-runoff model performance and selecting the best-suited model are important considerations in operational hydrology. However, model choice is often heuristic, based on a simplistic comparison of a single performance criterion without considering the statistical significance of differences in performance. This is potentially problematic because interpretation of a single performance criterion is subjective. This paper removes that subjectivity by applying a jackknife split-sample calibration method to create a sample mean of performance for each competing model, which is then used in a paired t-test, allowing statements of statistical significance to be made. A second method is presented, based on a hypothesis test in the binomial distribution, which considers model performance across a group of catchments.
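To illustrate the first method, the sketch below (not the authors' code, and using entirely hypothetical scores) shows the core idea: each jackknife split yields one performance score (e.g. Nash-Sutcliffe efficiency) per model on a catchment, and a paired t-test asks whether the mean difference between the two models' scores differs significantly from zero.

```python
import numpy as np
from scipy import stats

# Hypothetical per-split performance scores for two competing models on
# one catchment; in practice each value would come from one jackknife
# split-sample calibration/validation cycle.
scores_model_a = np.array([0.78, 0.81, 0.74, 0.80, 0.77, 0.79])
scores_model_b = np.array([0.75, 0.79, 0.72, 0.76, 0.74, 0.78])

# Paired t-test on the per-split differences between the two models.
t_stat, p_value = stats.ttest_rel(scores_model_a, scores_model_b)
print(f"paired t = {t_stat:.3f}, p = {p_value:.3f}")
# A small p-value indicates the difference in mean performance between
# the two models is statistically significant on this catchment.
```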
A case study comparing the performance of two rainfall-runoff models across 27 urban catchments within the Thames basin shows that while the urban signal is difficult to detect on a single catchment, it is significant across the group of catchments, depending upon the choice of performance criterion. These results demonstrate the operational applicability of the new tools and the benefits of considering model performance in a probabilistic framework.
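The second method can be sketched in the same hedged spirit: count the catchments on which one model outperforms the other and test that count against a binomial null of no systematic difference (p = 0.5). The count below is hypothetical; only the group size of 27 comes from the case study.

```python
from scipy import stats

n_catchments = 27   # group size, as in the Thames basin case study
n_a_better = 19     # hypothetical number of catchments favouring model A

# One-sided exact binomial test: is model A better on more catchments
# than chance alone would suggest?
result = stats.binomtest(n_a_better, n=n_catchments, p=0.5,
                         alternative="greater")
print(f"p = {result.pvalue:.4f}")
# Rejecting the null suggests one model performs better across the group,
# even if the difference is hard to detect on any single catchment.
```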
Original language | English
---|---
Article number | 0402005
Pages (from-to) | 1-26
Number of pages | 26
Journal | Journal of Hydrologic Engineering
Volume | 25
Issue number | 4
Early online date | 6 Feb 2020
DOIs |
Publication status | Published - 30 Apr 2020