
Are All Papers Created Equal?
And Why Do We Insist on Treating Them as Such?
For an industry that prides itself on progress, scientific publishing has a strange habit of rewarding conformity.
Every year, over three million scientific papers are published—a figure that doubles roughly every nine years. This massive output rests on the assumption that each publication adds a measurable contribution to the world’s knowledge base. But in practice, the academic system treats nearly all papers the same, relying on blunt proxies like citation counts, journal prestige, and impact factors to infer value.
These metrics reward safe, incremental research. Paradigm-challenging work, which by nature resists easy categorization or immediate validation, often lingers in obscurity. Many of the most important ideas in science were first ignored, misunderstood, or marginalized—and the current system does little to surface such work earlier.
That gap is what a new platform called Preprint Watch is trying to address. Instead of using citations as a proxy for importance, Preprint Watch classifies scientific papers based on their epistemic role—where they fall in the broader arc of scientific development. The tool doesn’t care how many people are reading a paper. It’s looking for signs that a preprint may indicate the early stages of a conceptual shift.
At the heart of the platform is a deep-reasoning agent called iKuhn, named after philosopher Thomas Kuhn, whose 1962 book The Structure of Scientific Revolutions outlined how science progresses through periodic upheavals rather than smooth accumulation. Preprint Watch applies this model through a set of semantic ontologies that classify preprints according to where they contribute to the so-called Kuhnian cycle (sketched in code after the list):
- Normal Science (refinement within a dominant model)
- Model Drift (early signs of theoretical stress)
- Model Crisis (systematic contradictions)
- Model Revolution (emergence of alternative frameworks)
- Paradigm Shift (replacement of the reigning model)
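To make the taxonomy concrete, here is one way those five stages might be encoded. The stage names and glosses come straight from the list above; the Python representation itself is a minimal illustrative sketch, not Preprint Watch's actual data model.

```python
from enum import Enum

class KuhnianStage(Enum):
    """Five stages of the Kuhnian cycle, as described in the article.

    This enum is a hypothetical sketch for illustration only; it is not
    the data model Preprint Watch or iKuhn actually uses.
    """
    NORMAL_SCIENCE = "refinement within a dominant model"
    MODEL_DRIFT = "early signs of theoretical stress"
    MODEL_CRISIS = "systematic contradictions"
    MODEL_REVOLUTION = "emergence of alternative frameworks"
    PARADIGM_SHIFT = "replacement of the reigning model"
```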
These classifications are assigned algorithmically by analysing the full text of preprints from sources like arXiv, bioRxiv, and medRxiv. The result is a signal fundamentally distinct from bibliometric measures such as citation counts, views, or downloads: one that tracks the progress of science across disciplines rather than the popularity of individual papers.
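For a rough sense of what such a pipeline could look like, the sketch below pulls an abstract from arXiv's public Atom API and assigns a stage with a crude keyword heuristic, reusing the KuhnianStage enum above. The classify_stage function and its cue words are entirely hypothetical stand-ins for iKuhn's deep-reasoning model, and a real system would, as the article notes, analyse the full text rather than just the abstract.

```python
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom XML namespace used by the arXiv API

def fetch_abstract(arxiv_id: str) -> str:
    """Fetch a preprint abstract from arXiv's public query API."""
    url = f"http://export.arxiv.org/api/query?id_list={arxiv_id}"
    with urllib.request.urlopen(url) as resp:
        feed = ET.fromstring(resp.read())
    entry = feed.find(f"{ATOM}entry")
    return entry.find(f"{ATOM}summary").text.strip()

def classify_stage(text: str) -> KuhnianStage:
    """Toy keyword classifier: a placeholder for a deep-reasoning model.

    Checks the most disruptive stages first and falls back to
    normal science when no cue word appears.
    """
    cues = {
        KuhnianStage.PARADIGM_SHIFT: ["new paradigm", "overturns"],
        KuhnianStage.MODEL_REVOLUTION: ["alternative framework"],
        KuhnianStage.MODEL_CRISIS: ["contradicts", "cannot explain"],
        KuhnianStage.MODEL_DRIFT: ["anomaly", "unexplained tension"],
    }
    lowered = text.lower()
    for stage, words in cues.items():
        if any(w in lowered for w in words):
            return stage
    return KuhnianStage.NORMAL_SCIENCE

if __name__ == "__main__":
    abstract = fetch_abstract("1706.03762")  # any public arXiv identifier
    print(classify_stage(abstract).name)
```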
For now, the platform delivers these signals via a public reporting feed and monthly digest. But the broader goal is to offer decision-makers—funders, publishers, R&D managers—an early detection and reconnaissance system for scientific research and discovery. “Even a 0.1% failure to detect disruptive research equates to billions of dollars in misallocated funding,” says co-founder Dr. Khalid Saqr. “This isn’t just about science—it’s about capital efficiency.”
This May, Saqr, an engineering simulation expert and founder of KNOWDYN, launched Preprint Watch with Dr. Gareth Dyke, a renowned palaeontologist and former publishing executive with over 300 academic papers to his name. Together, they built the platform as a response to the limitations they saw in both academic gatekeeping and automated research tools. Their aim is to add a new layer of inquiry to the evaluation of scholarly communication, one focused on epistemic contribution and position in the progress of science.
One of the most compelling aspects of the project is how little it asks of the user. It doesn’t require researchers to change how they write, publish, or tag their work. Instead, it reinterprets the research using a well-established philosophical model, translated into a computational framework.
In 2026, the team plans to introduce the Thomas Kuhn Prize, an annual award for the most disruptive preprint surfaced by the system. Unlike traditional prizes, this won’t be based on votes or nominations: The goal is to reward papers that challenge the foundations of their field—particularly those coming from outside elite institutions.
Though still early in its lifecycle, Preprint Watch is attracting interest among researchers sceptical of current incentives. It accepts preprint submissions freely from the scientific community, and the system is continuously monitored for signal anomalies. The broader idea, that epistemic contribution deserves its own measurement standard, is gaining traction. Since the pandemic, preprints have increasingly become vital social contracts that researchers, funders, and policymakers rely on to act without waiting for the peer-review stamp from slow journals and inefficient society committees.
Critics may question whether a machine can accurately apply Kuhnian reasoning to classify scholarly communications, or whether any system can reliably detect innovation in real time. But the platform’s creators are quick to clarify that they’re not claiming to predict Nobel prizes. “We’re not scoring papers,” Dyke explains. “We’re contextualizing them—mapping their relationship to the conceptual structures they affirm, extend, confront, or destabilize.”
The reality is that what emerges from Preprint Watch is not a ranking rubric, as it may first appear, but an entirely new type of content. The classification reports are well structured and explainable, examining each preprint through a critical lens and offering the stakeholders of science an immediate call to action. The platform may also serve as a supplementary decision-support feed, improving editorial judgment, reducing funding waste, and making discovery pipelines more responsive to underlying shifts in scientific knowledge.
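Purely as illustration, a structured, explainable report of the sort described might carry fields like the following. This schema is invented for this article; it is not the platform's actual output format.

```python
from dataclasses import dataclass, field

@dataclass
class ClassificationReport:
    """Hypothetical shape of an explainable classification report.

    Every field here is illustrative; Preprint Watch's real reports
    may look nothing like this.
    """
    preprint_id: str                  # e.g. an arXiv or bioRxiv identifier
    stage: KuhnianStage               # position in the Kuhnian cycle (enum above)
    rationale: str                    # plain-language explanation of the call
    evidence: list[str] = field(default_factory=list)  # supporting passages
    suggested_action: str = ""        # the "call to action" for stakeholders
```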
Of course, this kind of signal intelligence isn’t infallible. But that’s beside the point. What matters is that Preprint Watch is trying to measure something most platforms ignore: the trajectory of ideas, not just their afterglow.
If it works, it could help fix one of the most persistent inefficiencies in science. If it doesn’t, it still asks a question worth repeating: Are all papers created equal? Because the future of scientific progress may depend on how we choose to answer it.