Abstract
Assessing rainfall-runoff model performance and selecting the best-suited model are important considerations in operational hydrology. However, model choice is often heuristic, based on a simplistic comparison of a single performance criterion without considering the statistical significance of differences in performance. This is potentially problematic because the interpretation of a single performance criterion is subjective. This paper removed that subjectivity by applying a jackknife split-sample calibration method to create a sample mean of performance for each competing model, which was then used in a paired t-test, allowing statements of statistical significance to be made. A second method was presented based on a hypothesis test using the binomial distribution, considering model performance across a group of catchments. A case study comparing the performance of two rainfall-runoff models across 27 urban catchments within the Thames basin showed that although the urban signal was difficult to detect in a single catchment, it was significant across the group of catchments, depending upon the choice of performance criteria. These results demonstrated the operational applicability of the new tools and the benefits of considering model performance in a probabilistic framework.
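The two significance tests described in the abstract can be illustrated with a minimal sketch (not the authors' code): a paired t-test on jackknife split-sample performance scores for a single catchment, and a binomial test on how many catchments favour one model across the group. The NSE values and the count of catchments favouring model A below are hypothetical placeholders.

```python
# Illustrative sketch of the two significance tests described in the abstract.
# All numbers below are hypothetical placeholders, not results from the paper.
import numpy as np
from scipy import stats

# Hypothetical performance scores (e.g., Nash-Sutcliffe efficiency) from six
# jackknife calibration splits for two competing models on one catchment.
nse_model_a = np.array([0.71, 0.68, 0.73, 0.70, 0.69, 0.72])
nse_model_b = np.array([0.66, 0.69, 0.70, 0.65, 0.67, 0.68])

# Paired t-test: is the mean difference in performance statistically significant?
t_stat, p_value = stats.ttest_rel(nse_model_a, nse_model_b)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")

# Across a group of catchments: count how many favour model A and test
# against the null hypothesis of a 50/50 split using the binomial distribution.
n_catchments = 27
n_favouring_a = 19  # hypothetical count
result = stats.binomtest(n_favouring_a, n_catchments, p=0.5, alternative="greater")
print(f"binomial test: p = {result.pvalue:.3f}")
```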
| Original language | English |
|---|---|
| Article number | 04020005 |
| Pages (from-to) | 1-26 |
| Number of pages | 26 |
| Journal | Journal of Hydrologic Engineering |
| Volume | 25 |
| Issue number | 4 |
| Early online date | 6 Feb 2020 |
| DOIs | |
| Publication status | Published - 30 Apr 2020 |
Keywords
- Comparison techniques
- Hydrological model
- Hypothesis test
- Jackknife split-sample
- Operational
- Statistical significance
- Uncertainty analyses
ASJC Scopus subject areas
- Environmental Chemistry
- Civil and Structural Engineering
- Water Science and Technology
- Environmental Science (all)