
Science is Mostly Happy Accidents

Seven out of ten scientific studies find something nobody planned to find. That's not a failure rate. That's how science actually works.

The intuitive picture of science is comfortingly linear: researchers spot a problem, design an experiment to solve it, execute the plan, publish the results. We imagine white-coated precision, hypothesis confirmed or rejected, knowledge advancing in orderly steps. Textbooks reinforce this. Grant proposals certainly sell themselves this way. But this narrative is mostly fiction. It's the version of science we tell ourselves when we're trying to look serious. The real version is messier, more chaotic, and honestly more interesting than that.

According to a 2025 analysis, published in Nature, of 1.2 million biomedical publications, approximately 70 percent of published research reported outcomes that diverged from what the scientists originally proposed in their funding applications. The study examined grant proposals and their corresponding publications, tracking what researchers said they'd do versus what they actually reported discovering. The gap was enormous. This wasn't a small methodological correction or a minor pivot: these were genuine surprises, unexpected findings that changed the direction of the work or revealed something entirely new. The researchers weren't publishing null results in some dusty archive. These papers made it into the literature. They mattered enough to report.

What's striking is that this wasn't framed as a problem. The study didn't find that 70 percent of research was sloppy or failed to replicate. It found that 70 percent of research discovered things worth publishing that nobody had predicted beforehand. When you dig into the actual papers, the pattern becomes clear: a researcher studying one disease finds a mechanism that applies to another. Someone testing a drug for diabetes notices it affects immune function in an unexpected way. A technique developed for one purpose turns out to be brilliant for something else entirely. These aren't failures of the scientific method. They're the scientific method actually working.

The mechanism behind this paradox has both institutional and cognitive roots. Grant agencies want certainty: they want to fund researchers with clear hypotheses and defined endpoints, because that's easier to evaluate. Scientists, needing funding, frame their proposals in confident, predictable language. They can't say "I'm going to poke around and see what happens." So they perform a kind of translation: open-ended exploration becomes a tightly described experiment in the proposal. Then reality happens. The experiment runs. New data arrives. Unexpected patterns emerge. A good scientist follows the data, not the proposal. And if the findings are genuinely interesting, if they represent real knowledge, they get published, proposal be damned.

This also reflects a feature of how science actually progresses. Every answer raises new questions. Every experiment, if done carefully, generates noise alongside signal, and sometimes that noise is where the action is. Some of the most transformative discoveries in scientific history (penicillin, X-rays, radioactivity) emerged from unexpected observations in experiments designed to answer completely different questions. What's changed is that we now have enough data to see that this isn't exceptional. It's the baseline.

The practical implication is strange but important: if you want to fund transformative science, maybe you should stop demanding that scientists predict it in advance. The current system implicitly punishes researchers for being surprised, because surprise means deviation from the proposal. Yet surprise is often where the knowledge lives. Grant agencies have started to notice. Some are experimenting with funding models that leave room for productive wandering, that reward unexpected findings rather than penalizing them as program drift. The data suggest they should go further. The next generation of discoveries might depend on it.