Humans are often characterized as Bayesian reasoners. Here, we question the core Bayesian assumption that probabilities reflect degrees of belief. Across 10 studies, we find that people instead reason in a digital manner, assuming that uncertain information is either true or false when using that information to make further inferences. Participants learned about two hypotheses, both consistent with some information but one more plausible than the other. Although people explicitly acknowledged that the less-plausible hypothesis had positive probability, they ignored that hypothesis when making further predictions from the hypotheses. This pattern held across several ways of manipulating plausibility (simplicity, evidence fit, base rates) and a diverse array of task variations. Explicitly quantifying the predictive probabilities was the only boundary condition we could find, but even then participants under-utilized the less-plausible hypothesis relative to normative standards. We discuss implications for philosophy of science and for the organization of the mind.
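The contrast at the heart of the abstract can be made concrete with a small numerical sketch. The probabilities below are hypothetical, not from the studies; they simply illustrate how a normative Bayesian prediction, which averages over all hypotheses weighted by their posterior probabilities, diverges from a "digital" prediction that treats the more plausible hypothesis as simply true and ignores the alternative.

```python
# Hypothetical posterior probabilities of two hypotheses:
# H1 is more plausible, H2 less plausible but still possible.
p_h1, p_h2 = 0.7, 0.3

# Probability of some future event E under each hypothesis (also hypothetical).
p_e_given_h1, p_e_given_h2 = 0.2, 0.9

# Normative (Bayesian) prediction: average over hypotheses,
# weighting each by its posterior probability.
bayesian_prediction = p_h1 * p_e_given_h1 + p_h2 * p_e_given_h2

# "Digital" prediction: act as if the more plausible hypothesis is true,
# discarding the less-plausible one entirely.
digital_prediction = p_e_given_h1

print(f"Bayesian: {bayesian_prediction:.2f}")  # averages to 0.41
print(f"Digital:  {digital_prediction:.2f}")   # collapses to 0.20
```

Even though H2 has substantial probability (0.3) and strongly predicts E, the digital reasoner's forecast is driven entirely by H1, which is the pattern of neglect the studies report.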