Formal methods have largely been thought of in the context of safety-critical systems, where they have achieved major acceptance. Tens of millions of people trust their lives every day to such systems, based on formal proofs rather than “we haven’t found a bug” (yet!); so why is “we haven’t found a bug” an acceptable basis for systems trusted with hundreds of millions of people’s personal data? This paper looks at some of these issues in cybersecurity, and the extent to which formal methods, ranging from “fully verified” systems to better tool support, could help. More importantly, recent policy reports and curriculum initiatives appear to recommend formal methods only in the limited context of “safety-critical applications”; we suggest this is too limited in scope and ambition. Not only are formal methods needed in cybersecurity, but the repeated and very public weaknesses of the cybersecurity industry provide a powerful motivation for adopting them.