TY - GEN
T1 - Decision Heuristics for Comparison: How Good Are They?
AU - Buckmann, Marcus
AU - Şimşek, Özgür
PY - 2017/8/14
Y1 - 2017/8/14
N2 - Simple decision heuristics are cognitive models of human and animal decision making. They examine few pieces of information and combine the pieces in simple ways, for example, by considering them sequentially or giving them equal weight. They have been studied most extensively for the problem of comparison, where the objective is to identify which of a given number of alternatives has the highest value on a specified (unobserved) criterion. We present the most comprehensive empirical evaluation of decision heuristics to date on the comparison problem. In a diverse collection of 56 real-world data sets, we compared heuristics to powerful statistical learning methods, including support vector machines and random forests. Heuristics performed surprisingly well. On average, they were only a few percentage points behind the best-performing algorithm. In many data sets, they yielded the highest accuracy in all or parts of the learning curve. The first part of the supplement describes implementation details of the algorithms tested; the second part describes the 56 public data sets used in the empirical analysis.
AB - Simple decision heuristics are cognitive models of human and animal decision making. They examine few pieces of information and combine the pieces in simple ways, for example, by considering them sequentially or giving them equal weight. They have been studied most extensively for the problem of comparison, where the objective is to identify which of a given number of alternatives has the highest value on a specified (unobserved) criterion. We present the most comprehensive empirical evaluation of decision heuristics to date on the comparison problem. In a diverse collection of 56 real-world data sets, we compared heuristics to powerful statistical learning methods, including support vector machines and random forests. Heuristics performed surprisingly well. On average, they were only a few percentage points behind the best-performing algorithm. In many data sets, they yielded the highest accuracy in all or parts of the learning curve. The first part of the supplement describes implementation details of the algorithms tested; the second part describes the 56 public data sets used in the empirical analysis.
M3 - Chapter in a published conference proceedings
T3 - Proceedings of Machine Learning Research
SP - 1
EP - 11
BT - Proceedings of the NIPS 2016 Workshop on Imperfect Decision Makers
ER -