Operational comparison of rainfall-runoff models through hypothesis testing

James Fidal, Thomas Kjeldsen

Research output: Contribution to journal › Article › peer-review



Assessing rainfall-runoff model performance and selecting the model best suited to a given application are important considerations in operational hydrology. However, model choice is often heuristic, based on a simplistic comparison of a single performance criterion without considering the statistical significance of differences in performance. This is potentially problematic because the interpretation of a single performance criterion is subjective. This paper removes that subjectivity by applying a jackknife split-sample calibration method to create a sample mean of performance for each competing model, which is then used in a paired t-test, allowing statements of statistical significance to be made. A second method is presented based on a hypothesis test in the binomial distribution, considering model performance across a group of catchments.
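The paired t-test step can be sketched as follows. This is an illustrative example, not the authors' code: the jackknife split-sample scores are fabricated here, and the performance criterion (e.g. Nash-Sutcliffe efficiency) and model labels are placeholders.

```python
# Sketch of the paired-t-test idea: each jackknife split yields one
# performance score per model; the paired test asks whether the mean
# difference in scores is significantly different from zero.
# All values below are synthetic placeholders, not study results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_splits = 20  # hypothetical number of jackknife calibration splits
nse_model_a = rng.normal(loc=0.72, scale=0.05, size=n_splits)
nse_model_b = nse_model_a + rng.normal(loc=0.03, scale=0.02, size=n_splits)

# Paired t-test: H0 says the mean performance difference is zero.
t_stat, p_value = stats.ttest_rel(nse_model_b, nse_model_a)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Pairing matters here because both scores in each split come from the same calibration/validation data, so their difference removes split-to-split variability.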

A case study comparing the performance of two rainfall-runoff models across 27 urban catchments within the Thames basin shows that, while the urban signal is difficult to detect in a single catchment, it is significant across the group of catchments, depending upon the choice of performance criterion. These results demonstrate the operational applicability of the new tools and the benefits of considering model performance in a probabilistic framework.
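The group-level binomial test can be illustrated as follows. This is a hedged sketch, not the paper's implementation: the win count is hypothetical, and only the group size (27 catchments) comes from the abstract.

```python
# Sketch of a binomial hypothesis test across a group of catchments:
# count how often one model outperforms the other, and compare that
# count against Binomial(n, 0.5), i.e. H0 = both models equally likely
# to win in any catchment. The win count below is hypothetical.
from scipy.stats import binom

n_catchments = 27     # group size, as in the case study
n_model_b_wins = 19   # hypothetical number of catchments where model B scores higher

# One-sided p-value: P(X >= n_model_b_wins) under H0 with p = 0.5.
# binom.sf(k - 1, n, p) gives P(X >= k).
p_value = binom.sf(n_model_b_wins - 1, n_catchments, 0.5)
print(f"P(X >= {n_model_b_wins} of {n_catchments}) = {p_value:.4f}")
```

A small one-sided p-value here would indicate that model B wins more often across the group than chance alone would explain, even if no single catchment shows a significant difference.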
Original language: English
Article number: 0402005
Pages (from-to): 1-26
Number of pages: 26
Journal: Journal of Hydrologic Engineering
Issue number: 4
Early online date: 6 Feb 2020
Publication status: Published - 30 Apr 2020


