Opaque AI research tools could undermine trust in, and the accuracy of, scientific findings

Overdependence on ‘opaque’ artificial intelligence (AI) systems in research could make scientific findings less reliable and limit their usefulness in solving real-world challenges, a report by the Royal Society has found.

The Science in the Age of AI report from the UK’s national academy of sciences explores the opportunities and challenges of machine learning and large language models as transformative tools for 21st-century research.

From coding and statistical analysis to generating novel insights from vast datasets, AI tools are already transforming fields ranging from drug discovery to climate modelling.

However, the report warns, the complexity and ‘black box’ nature of sophisticated machine-learning models mean that their outputs cannot always be explained by the researchers using them.

This does not stop AI generating useful insights. However, a growing body of irreproducible AI- and machine learning-based studies raises questions about the soundness of their conclusions.

Ultimately, the report warns that unreliable or untrustworthy AI technologies pose risks to science and to society’s trust in its findings. 

To mitigate these challenges and maximise the benefits AI can bring, the report recommends: 

  • Establishing ‘open science’, environmental, and ethical frameworks for AI-based research to help ensure findings are accurate and reproducible, and support the public good. This could include agreements to make the data AI models are trained on available to researchers, or ‘red teaming’ exercises to test the guardrails of what they can be used for.
  • Investing in ‘CERN-style’ regional and cross-sector AI infrastructure, to ensure all scientific disciplines can access the computing power and data resources needed to conduct rigorous research, and to maintain the competitiveness of non-industry researchers.
  • Promoting AI literacy among researchers, and collaboration with developers, to ensure AI tools are accessible and usable.

The peer-reviewed report was led by an expert working group of academics and industry figures and drew on evidence reviews, interviews and workshops covering emerging applications and trends in AI-supported research, safety risks and the patent landscape.

Professor Alison Noble CBE FREng FRS, Vice President of the Royal Society and Technikos Professor of Biomedical Engineering, University of Oxford and Chair of the report’s working group, said:

“This report captures how the rapid adoption of AI across scientific disciplines is transforming research and enabling leaps in understanding that would not have been thinkable a decade ago.

“While AI systems are useful, they are not perfect. We should think of them almost like a scientific peer: they can offer valuable insights, but you would expect to be able to verify them yourself – and that isn’t always the case with current AI studies.

“In my field, healthcare research, researchers have always struck a balance between confidentiality and transparency. Ensuring AI systems are as open as possible is a vital step towards ensuring their development benefits science and society.”

Dr Peter Dayan FRS, Director, Max Planck Institute for Biological Cybernetics and a member of the working group, said: 

“Science is constantly enjoying the introduction of disruptive new methodologies, from the microscope to computers, but adoption can be limited by things like cost, availability of the technology, and skills.

“Now machine learning is set to transform vast swathes of research, we need to ask what that requires in terms of access.

“The report’s recommendations look at how we can achieve that confluence of high-quality data, processing power and researcher skills in a suitably equitable manner.

“The UK has always been good at generating large volumes of high-quality data in everything from large cohort studies, like UK Biobank, to areas like neuroscience; for these, machine learning will be a particularly critical tool.”

The pace of change in the AI landscape is showcased in a review of international patent filings up to 2021, undertaken by IP Pragmatics Limited on behalf of the report’s working group, which found that approximately 74% of all AI patent filings occurred in the previous five years.

It suggests that China and the US lead on the number and value of patents filed, respectively, but that the UK is well positioned for growth in this area. While the UK ranks 10th globally on patents, it is second in Europe behind Germany and has a 14.7% share of the AI life sciences market.