
U.S. News Releases 2025-2026 Best Global Universities Rankings

U.S. News & World Report, the global authority in education rankings and consumer advice, today published the 2025-2026 Best Global Universities rankings, which evaluate more than 2,250 schools on academic research and reputation.

The latest edition includes universities from more than 100 countries. The following countries have the most schools in the overall rankings:

  • China: 397
  • United States: 280
  • India: 118
  • Japan: 104
  • United Kingdom: 93

“The Best Global Universities rankings provide valuable benchmarks for students considering international study options or seeking institutions with a strong global research impact,” said LaMont Jones, Ed.D., managing editor for Education at U.S. News. “The rankings – one of many tools that U.S. News offers to assist with college decision-making – help students identify universities with worldwide recognition and cross-border academic excellence that can serve as a launching pad for their careers.”

Of the 51 subject rankings, eight will have more schools ranked this year: agricultural sciences; artificial intelligence; computer science; electrical and electronic engineering; engineering; gastroenterology and hepatology; plant and animal science; and polymer science.

Powered by data and metrics from the Web of Science Core Collection and InCites Benchmarking & Analytics provided by Clarivate, a leading global provider of transformative intelligence, the Best Global Universities methodology weighs factors that measure a university’s global and regional research reputation and academic research performance.

For the overall ranking, this includes bibliometric indicators such as publications, citations and international collaboration. Each of the 51 subject rankings has its own weighting based on academic research performance in that specific area.
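The press release does not disclose the exact indicator weights, but the mechanics of such a composite ranking can be sketched: each bibliometric indicator is normalized, then combined under fixed weights that sum to one. All indicator names and weights below are hypothetical placeholders, not the actual U.S. News methodology.

```python
# Hypothetical sketch of a weighted composite ranking score.
# Indicator names and weights are illustrative only, not the
# actual U.S. News Best Global Universities methodology.
def composite_score(indicators: dict[str, float], weights: dict[str, float]) -> float:
    """Combine normalized (e.g. z-scored) indicators into one weighted score."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(weights[k] * indicators[k] for k in weights)

weights = {
    "global_reputation": 0.125,           # assumed
    "regional_reputation": 0.125,         # assumed
    "publications": 0.10,                 # assumed
    "normalized_citation_impact": 0.10,   # assumed
    "international_collaboration": 0.05,  # assumed
    "other_bibliometrics": 0.50,          # remainder, assumed
}

# A hypothetical university one standard deviation above average on
# every indicator scores exactly 1.0 under any weights summing to 1.
example = {k: 1.0 for k in weights}
print(round(composite_score(example, weights), 2))  # 1.0
```

Because each subject ranking has its own weighting, the same function would simply be called with a different `weights` dictionary per subject.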

2025-2026 U.S. News & World Report Best Global Universities

See the full rankings here.

Overall Best Global Universities – Top 10

  1. Harvard University (U.S.)
  2. Massachusetts Institute of Technology (MIT) (U.S.)
  3. Stanford University (U.S.)
  4. University of Oxford (U.K.)
  5. University of Cambridge (U.K.)
  6. University of California Berkeley (U.S.)
  7. University College London (U.K.)
  8. University of Washington Seattle (U.S.)
  9. Yale University (U.S.)
  10. Columbia University (U.S.)

Africa – Top 3

  1. University of Cape Town (South Africa)
  2. Cairo University (Egypt)
  3. University of the Witwatersrand (South Africa)

Asia – Top 3

  1. Tsinghua University (China)
  2. National University of Singapore (Singapore)
  3. Peking University (China)

Australia/New Zealand – Top 3

  1. University of Sydney (Australia)
  2. University of Melbourne (Australia)
  3. University of New South Wales Sydney (Australia)

Europe – Top 3

  1. University of Oxford (U.K.)
  2. University of Cambridge (U.K.)
  3. University College London (U.K.)

Latin America – Top 3

  1. Universidade de Sao Paulo (Brazil)
  2. Universidade Estadual de Campinas (Brazil)
  3. Pontificia Universidad Catolica de Chile (Chile)

The Best Global Universities rankings serve the broader U.S. News mission of providing trusted information and rankings – such as Best High Schools, Best Colleges and Best Online Programs – to empower all students in making informed choices about their education.

For more information on the Best Global Universities, visit X (formerly Twitter), TikTok, Facebook and Instagram using #BestGlobal.

Critical gaps in ethical publishing knowledge among researchers in China, reveals new survey

Joint study by Taylor & Francis and National Science Library of the Chinese Academy of Sciences is published in the Journal of Data and Information Science

A new survey has revealed a widespread lack of clarity among researchers in China regarding ethical publishing practices, particularly when third-party manuscript services are involved. Analyzing the results of the survey, the authors of a study published in the Journal of Data and Information Science highlight the urgent need for all researchers to receive comprehensive and fit-for-purpose ethics education.

Conducted as part of a pioneering collaboration between international publisher Taylor & Francis and the National Science Library at the Chinese Academy of Sciences (CAS), the survey of 1,777 students, researchers and librarians demonstrates high levels of uncertainty about research and publishing ethics.

Understanding the responsibilities that come with article authorship is an area of confusion for 35.9% of survey respondents, with master’s students reporting the highest levels of confusion. In addition, a notable proportion report having engaged in practices that constitute ‘gift authorship’: that is, adding an author to a paper, or agreeing to be named as an author, when proper authorship criteria are not met.

The study also explores researchers’ perceptions of the services offered by third parties to support publication in international journals. While 31% of respondents report using third-party services, there is limited ability to recognize some of the unethical options offered. Most respondents correctly recognized services such as language editing, translation and formatting to be acceptable. However, a concerning number of respondents also consider activities typically carried out by paper mills, such as writing parts of a paper or adding authors and citations of the agent’s own choosing, to be acceptable.

The authors of the study conclude that there needs to be timely, accessible, fit-for-purpose training in research integrity and publishing ethics for researchers at all levels. This should include undergraduates and those at institutions who are responsible for upholding overall integrity standards. Core topics such as authorship responsibilities and working with ethical third-party manuscript services must be included in mandatory training. Only 55.4% of the survey respondents said they currently had any access to training, and an even smaller proportion to formal training.

The quality of current training is also called into question by the report. While researchers with formal training display more awareness and concern about ethical issues, many are still unable to confidently identify questionable practices, particularly around authorship.

The survey was developed by the Joint Lab on Research Integrity, a project supported by Taylor & Francis and the National Science Library at CAS. Established in December 2023, the Lab aims to better understand and address research ethics challenges in China by combining publisher expertise with direct research institution experience and insights.

Dr Sabina Alam, Director of Publishing Ethics & Integrity at Taylor & Francis, said: “The findings of our survey highlight the urgent need for training for students and researchers at all levels in China, a finding which we believe to be applicable to many students and researchers across the world. Until then, the knowledge gaps we’ve discovered leave researchers vulnerable to exploitation by unethical organizations, such as paper mills, and many may unknowingly engage in misconduct.”

“It is no wonder that 80% of those who responded to our survey are currently concerned about the impact of research integrity issues on the trustworthiness of research publications,” Alam added. “Partnerships between publishers and research institutions will be key to tackling global research integrity challenges, including the development and implementation of comprehensive training in research integrity and publishing ethics. A key motivation for establishing our collaboration with the National Science Library at CAS was to explore critical issues and we believe these results from our Joint Lab show the benefits of working together in this way.”

Dr Zhesi Shen, Deputy Director of the Department for Scientometrics and Research Evaluation at the National Science Library, CAS, said: “Research institutions and libraries play a vital role in educating the next generation of researchers about academic integrity standards and their implementation. Through close collaboration with publishers, these organizations can leverage their complementary strengths to develop and deliver systematic training programs tailored to local needs, while collectively addressing global research integrity challenges.”

Research article: ‘Perceptions and recommendations about research integrity and publishing ethics: a survey among Chinese researchers on training, challenges and responsibilities’ by Sabina Alam, Victoria Babbit, Jason Hu, Ying Lou, Zhesi Shen, Laura Wilson and Zhengyi Zhou, Journal of Data and Information Science.

Cactus Communications and NIT Calicut Partner to Bring AI Research Tools to Academics

In a landmark collaboration that underscores India’s growing emphasis on AI-driven research excellence, Cactus Communications (CACTUS), a leading technology company specializing in AI-powered solutions and expert services for the scholarly publishing ecosystem, has partnered with the National Institute of Technology Calicut (NIT Calicut) to provide institution-wide access to its advanced AI research tools via the AI toolkit, Editage Plus.

This collaboration will enable NIT Calicut’s students, faculty, and researchers to leverage AI effectively to enhance research discovery, improve writing quality, and streamline scholarly publishing. By integrating these tools into its academic ecosystem, NIT Calicut joins a growing movement of Indian institutions that are embracing AI to accelerate research productivity and improve global impact.

With access to CACTUS’s AI suite, academics at NIT Calicut will now have an advanced toolkit to enhance efficiency at every research stage:

  • Paperpal: The all-in-one academic writing and research assistant trusted by 2M+ academics.
  • R Discovery: The top-rated literature search and research reading platform with 3M+ users.
  • Mind the Graph: The accurate scientific illustration tool trusted by 100+ leading institutions.
  • Global Journal Database: The best journal finder with a database of 43K+ verified journals.

Akhilesh Ayer, CEO, Cactus Communications, emphasized the significance of this partnership: “We are witnessing an increased adoption of AI tools among top academic institutions globally, and our partnership with NIT Calicut reflects this trend. We have an unwavering commitment towards helping researchers reach their highest potential by empowering them with advanced technology that simplifies, enhances and accelerates their research journey.”

Echoing this sentiment, Prof. Prasad Krishna, Director, NIT Calicut, remarked, “NIT Calicut has always been at the forefront of technological advancements. By incorporating CACTUS’s Editage Plus toolkit into our academic framework, we aim to equip our academics with cutting-edge AI research tools that will enhance their scholarly contributions and global impact.”

Nishchay Shah, Group CTO and EVP, Products & AI, Cactus Communications, highlighted CACTUS’s broader vision: “This collaboration with NIT Calicut underscores our mission to create AI solutions that directly address the unique challenges faced by researchers and institutions. By providing access to trusted AI-driven tools, we are enabling academics to focus on what truly matters—innovative and impactful research.”

Focused on innovation and excellence, CACTUS remains dedicated to empowering researchers by working closely with leading universities across the world, helping them transition to smarter, AI-enabled academic workflows. As Indian research institutions continue to evolve, collaborations like this one between CACTUS and NIT Calicut pave the way for a future where technology and academia work hand in hand to drive global research impact.

Cadmore Media Announces JAMA Network, AMA Ed Hub will Use Platform to Streamline Video Publishing

Cadmore Media is pleased to announce that the publishing groups at the American Medical Association (AMA), JAMA Network™ and the AMA Ed Hub™, have selected its platform as the solution for video hosting and streaming.

This collaboration marks a major advancement in the AMA’s ability to deliver live and on-demand video across its publishing and educational platforms. With the full migration of the JAMA Network and AMA Ed Hub video archives to Cadmore, both journal and educational teams can now livestream an event and automatically keep the same video available on demand, without requiring re-publication, providing a seamless, multimedia-rich experience for users.

At the core of the solution is a configurable backend workflow developed by product management teams at the AMA and Cadmore. This toolkit allows AMA and JAMA teams to publish video across multiple websites simultaneously—supporting both real-time and on-demand content—with automatic XML-based archiving to ensure long-term access and compliance with scholarly publishing standards. 

The implementation integrates with the AMA’s existing content systems, including its front-end websites, and supports consistent branding, accessibility and metadata compliance. 

Key features of this solution include: 

  • Customized video player design aligned with the network’s brand and publishing standards, including automatic transcription, chaptering, sharing, chat moderation, and more to support the dissemination of scholarly content
  • Embedded video within JAMA Network articles and the AMA Ed Hub
  • A centralized, scalable workflow for live streaming and multi-site publishing
  • Integration with the JAMA Network’s content repository and delivery website
  • XML-based archiving for long-term discoverability and compliance 

“Our work with Cadmore Media gives us turnkey control over live and on-demand video publishing,” said Paul Gee, VP of Digital Product Management and Development at the JAMA Network. “The streamlined workflow means we can deliver content wherever it’s needed, quickly and reliably, while also meeting our metadata and archiving needs.” 

Violaine Iglesias, CEO and Co-Founder of Cadmore Media, added: “Partnering with a network as respected as JAMA reflects our commitment to helping scholarly and professional publishers bring video to the forefront of communication. It’s an honor to support the JAMA Network and the AMA Ed Hub with tools that make high-impact multimedia publishing simple and scalable.”

IOP Publishing makes supplementary research data more visible 

IOP Publishing (IOPP) is advancing open science and research impact by assigning Digital Object Identifiers (DOIs) to supplementary data files submitted alongside research papers. This initiative will make supplementary materials more Findable, Accessible, Interoperable, and Reusable (FAIR), ensuring that authors receive greater recognition for their contributions beyond the primary research article.

A 2020 study demonstrated that articles with linked data receive a citation advantage, showing the value of this initiative. Supplementary files are often not easily discoverable or citable, which limits their impact.  

Benefits for authors and researchers:

  • Increased recognition and credit: Supplementary files will become formally citable, allowing authors to gain recognition for a broader range of research outputs.
  • Enhanced discoverability: Data will be prominently linked on article pages, have its own dedicated ‘article’ pages on IOPscience, and be easily found through search engines.
  • Support for funder open science policies: Authors will be better supported to meet some funders’ requirements to publicly share data associated with publicly funded research.

“Assigning DOIs to supplementary data is another step for IOPP toward making research more transparent and interconnected. It will help authors get the credit they deserve for more aspects of their work while making it easier for them to find and access relevant data. This initiative advances our journey toward a more FAIR research ecosystem and strengthens our commitment to open science, aligning with the global shift toward open data and scientific reproducibility,” says Daniel Keirs, Head of Journal Strategy and Performance at IOP Publishing.

IOPP’s decision to assign DOIs to supplementary research data was taken in response to feedback from the research community. A survey of researchers publishing with IOPP found that they see improving links between articles and relevant data as one of their highest priorities for publishers. Researchers also expressed their support for assigning persistent identifiers to supplementary data after it was trialled on selected journals.

MDPI Books Launches Brand New Social Sciences, Arts & Humanities Competition

An opportunity not to be missed! MDPI Books warmly invites all Social Sciences, Arts, and Humanities (SSAH) researchers to submit a proposal to its inaugural book competition. The winning entries will be published free of charge on the MDPI Books platform. The selected authors will be able to showcase their expertise to a global audience, contributing to broader conversations on the role of SSAH in addressing key global challenges.

Entries will be judged by subject experts from MDPI Books. The judges are looking for entries that make an important contribution to knowledge and demonstrate the following:

  • An original contribution to the field;
  • A clear, well-structured argument;
  • Methodological soundness;
  • Relevance to current societal or academic debates and clear engagement with the state of the art;
  • Interdisciplinary potential.

Selected proposals will be invited to proceed to the next stage, which includes peer review in accordance with MDPI Books’ editorial standards.

Submissions are now open, closing on 31 August 2025.

“Researchers in the Social Sciences, Arts, and Humanities (SSAH) make substantial contributions to public discourse and explore topics of broad public interest,” says Laura Wagner, Head of Books Division at MDPI. “However, they often work within formats such as books and may have limited financial resources for open access publishing. At MDPI, we recognize both the value of their work and the challenges they face, and we are committed to supporting greater accessibility and impact in these fields. The competition will do so by providing the opportunity to publish in open access at no cost to the authors, helping their work reach wider audiences.”

Eligibility

  • The competition is open to researchers working within the Social Sciences, Arts, and Humanities.
  • Proposals can be submitted by an individual or predefined team of authors.
  • Both monographs and edited volumes with a clear, coherent concept are eligible. 
  • Proposed books should aim for a length of between 30,000 and 90,000 words.
  • All proposals must be original, unpublished, and intended for open access publication under a CC BY license with MDPI Books.
  • There are no restrictions based on career stage, geographic location, or institutional affiliation.

Submitting a Proposal

Entries can be made by completing an online application form, following the Guidelines for Manuscript Submission. All proposals must clearly indicate they are being submitted for the ‘Social Sciences, Arts and Humanities Book Competition’. Book publication is contingent on successful peer review and final approval processes. Authors of non-selected proposals will be contacted to explore alternative funding options or publication pathways with MDPI Books.

New Preprint Watch Platform Launched

Are All Papers Created Equal?

And Why Are We Insisting on Treating Them as Such?

For an industry that prides itself on progress, scientific publishing has a strange habit of rewarding conformity.

Every year, over three million scientific papers are published—a figure that doubles roughly every nine years. This massive output rests on the assumption that each publication adds a measurable contribution to the world’s knowledge base. But in practice, the academic system treats nearly all papers the same, relying on blunt proxies like citation counts, journal prestige, and impact factors to infer value.

These metrics reward safe, incremental research. Paradigm-challenging work, which by nature resists easy categorization or immediate validation, often lingers in obscurity. Many of the most important ideas in science were first ignored, misunderstood, or marginalized—and the current system does little to surface such work earlier.

That gap is what a new platform called Preprint Watch is trying to address. Instead of using citations as a proxy for importance, Preprint Watch classifies scientific papers based on their epistemic role—where they fall in the broader arc of scientific development. The tool doesn’t care how many people are reading a paper. It’s looking for signs that a preprint may indicate the early stages of a conceptual shift.

At the heart of the platform is a deep-reasoning agent called iKuhn, named after philosopher Thomas Kuhn, whose 1962 book The Structure of Scientific Revolutions outlined how science progresses through periodic upheavals rather than smooth accumulation. Preprint Watch applies this model through a set of semantic ontologies that classify preprints according to where they fall in the so-called Kuhnian cycle:

  • Normal Science (refinement within a dominant model)
  • Model Drift (early signs of theoretical stress)
  • Model Crisis (systematic contradictions)
  • Model Revolution (emergence of alternative frameworks)
  • Paradigm Shift (replacement of the reigning model)

These classifications are assigned algorithmically by analysing the full text of preprints from sources like arXiv, bioRxiv, and medRxiv. The result is a signal fundamentally distinct from bibliometric measures like citation count, views, or downloads: one that attempts to track the progress of science across disciplines rather than the popularity of individual papers.
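The classification idea can be illustrated with a toy sketch. The real system uses a deep-reasoning agent (iKuhn) over full text; the keyword heuristic below, including its cue phrases, is purely a hypothetical stand-in to show how a preprint might be mapped to a stage of the Kuhnian cycle.

```python
# Toy illustration of mapping a preprint abstract to a Kuhnian-cycle
# stage. Preprint Watch's iKuhn agent reasons over full text; this
# keyword lookup is a hypothetical simplification, cues included.
STAGES = [
    ("Paradigm Shift",   ["replaces the standard model", "new paradigm"]),
    ("Model Revolution", ["alternative framework", "radically different"]),
    ("Model Crisis",     ["systematic contradiction", "cannot be explained"]),
    ("Model Drift",      ["anomaly", "tension with theory"]),
]

def classify(abstract: str) -> str:
    """Return the first stage whose cue phrases appear in the abstract."""
    text = abstract.lower()
    for stage, cues in STAGES:
        if any(cue in text for cue in cues):
            return stage
    return "Normal Science"  # default: refinement within the dominant model

print(classify("We report an anomaly in the rotation curves."))  # Model Drift
print(classify("We refine parameter estimates for the model."))  # Normal Science
```

Checking the most disruptive stages first means a paper proposing an alternative framework is not misfiled as mere drift because it also mentions an anomaly.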

For now, the platform delivers these signals via a public reporting feed and monthly digest. But the broader goal is to offer decision-makers—funders, publishers, R&D managers—an early detection and reconnaissance system for scientific research and discovery. “Even a 0.1% failure to detect disruptive research equates to billions of dollars in misallocated funding,” says co-founder Dr. Khalid Saqr. “This isn’t just about science—it’s about capital efficiency.”

This May, Saqr, an engineering simulation expert and founder of KNOWDYN, launched Preprint Watch with Dr. Gareth Dyke, a renowned palaeontologist and former publishing executive with over 300 academic papers to his name. Together, they built the platform as a response to the limitations they saw in both academic gatekeeping and automated research tools. Their aim is to offer a new layer of evaluation for scholarly communication, focused on epistemic contribution and position in the progress of science.

One of the most compelling aspects of the project is how little it asks of the user. It doesn’t require researchers to change how they write, publish, or tag their work. Instead, it reinterprets the research using a well-established philosophical model, translated into a computational framework.

In 2026, the team plans to introduce the Thomas Kuhn Prize, an annual award for the most disruptive preprint surfaced by the system. Unlike traditional prizes, this won’t be based on votes or nominations: The goal is to reward papers that challenge the foundations of their field—particularly those coming from outside elite institutions.

Though still early in its lifecycle, Preprint Watch is attracting interest among researchers sceptical of current incentives. It accepts preprint submissions freely from the scientific community, while the system is continuously monitored for signal anomalies. But the broader idea—that epistemic contribution deserves its own measurement standard—is gaining traction. After the pandemic, preprints are increasingly becoming vital social contracts that researchers, funders, and policymakers rely on to act without having to wait for the peer-review stamp from slow journals and inefficient society committees.

Critics may question whether a machine can accurately apply Kuhnian reasoning to classify scholarly communications, or whether any system can reliably detect innovation in real time. But the platform’s creators are quick to clarify that they’re not claiming to predict Nobel prizes. “We’re not scoring papers,” Dyke explains. “We’re contextualizing them—mapping their relationship to the conceptual structures they affirm, extend, confront, or destabilize.”

In reality, what emerges from Preprint Watch is not a ranking rubric, as it may seem, but an entirely new type of content. The classification reports are well structured and explainable, and they inspect preprints with a critical lens that provides an immediate call to action for the stakeholders of science. The platform may also offer a supplementary decision-support feed to improve editorial judgment, reduce funding waste, and make discovery pipelines more responsive to underlying shifts in scientific knowledge.

Of course, this kind of signal intelligence isn’t infallible. But that’s beside the point. What matters is that Preprint Watch is trying to measure something most platforms ignore: the trajectory of ideas, not just their afterglow.

If it works, it could help fix one of the most persistent inefficiencies in science. If it doesn’t, it still asks a question worth repeating: Are all papers created equal? Because the future of scientific progress may depend on how we choose to answer it.

Elsevier Unveils Rigorous Evaluation Framework to Mitigate Risk in Generative AI Clinical Decision Support Tools 

Elsevier, a global leader in medical information and data analytics, unveiled a groundbreaking evaluation framework for assessing the performance and safety of generative AI-powered clinical reference tools. This innovative approach has been developed for all Elsevier Health generative AI solutions, including ClinicalKey AI, Elsevier’s advanced clinical decision support platform, and sets a new standard for responsible AI integration in healthcare. It will be featured in a future issue of the open-access Journal of the American Medical Informatics Association (JAMIA).

The framework, designed with input from clinical subject matter experts across multiple specialties, evaluates AI-generated responses along five critical dimensions: query comprehension, response helpfulness, correctness, completeness, and potential for clinical harm. It serves as a comprehensive assessment to ensure that AI-powered tools not only provide accurate and relevant information but also align with the practical and current needs of healthcare professionals at the point of care. 

Omry Bigger, President of Clinical Solutions at Elsevier: “This evaluation framework not only supports innovation and advancements to improve patient care but adds an extra layer of review and assessment to ensure physicians are armed with the most accurate information possible. It’s a critical step in the implementation of responsible AI for healthcare providers and patients.”  

In a recent evaluation study of ClinicalKey AI, Elsevier worked with a panel of 41 board-certified physicians and clinical pharmacists to rigorously test responses generated by the tool for a diverse set of clinical queries. The panel evaluated 426 query-response pairs, and the results demonstrated impressive performance, with 94.4% of responses rated as helpful, 95.5% assessed as completely correct, and just 0.47% flagged for potential improvements.

Leah Livingston, Director of Generative AI Evaluation for Health Markets at Elsevier, said: “These results reflect not just strong performance, but the real value of bringing clinicians into the evaluation process. By designing an evaluation framework around what matters most to physicians—accuracy, relevance, and clinical safety—we’re helping ensure that AI tools truly add value to care delivery. This approach supports clinicians in quickly accessing the right information, ultimately reducing cognitive burden.”

Elsevier is continuing to implement AI responsibly in its portfolio of AI solutions and is also involved in industry-wide initiatives. As a proud partner of the Coalition for Health AI, the company is actively contributing to industry-wide standards for responsible AI deployment in healthcare settings. 

The release of the evaluation framework represents a significant step forward in the responsible integration of AI technologies in healthcare, paving the way for more efficient, accurate, and patient-centered clinical decision-making.  

To learn more details about the evaluation framework, a white paper is available for download: Evaluation framework for generative AI tools used for clinical decision support

Clarivate Partners with American Library Association to Advocate for U.S. Libraries

Clarivate Plc, a leading global provider of transformative intelligence, today announced a new milestone in its decades-long partnership with the American Library Association (ALA). Clarivate will be the first sponsor of the ALA Public Supporter Program, which engages the public in supporting libraries and library professionals.

The ALA is the largest library association in the world. It advocates for libraries, library workers, and everyone they serve, from small, rural libraries to the largest library systems in the country.

The Public Supporter Program, which launched on Feb. 10, 2025, provides the public with access to valuable information and resources about library advocacy, news, and ways to get involved in protecting libraries. The program aims to bolster ALA’s efforts to ensure libraries continue to provide essential services and resources to communities, promoting literacy, education, and access to information. ALA is a nonprofit, nonpartisan organization.

Bar Veinstein, President, Academia & Government at Clarivate, said: “Libraries have always been a cornerstone of education, research, and access to information. We are proud to stand and partner with the American Library Association as leading library advocates in the U.S., providing them with the resources and support they need to continue their vital work.”

Contributions to the Public Supporter Program will help advance key ALA initiatives including:

  • Library funding: The ALA advocates for funding in the halls of Congress and state and local governments.
  • Library grants: The ALA provides grant opportunities to small and rural libraries.
  • Right to read: The ALA champions everyone’s right to read, without censorship.
  • Internet for all: The ALA advocates for broadband funding.

ALA President Cindy Hohl expressed her gratitude to Clarivate for this support.

“We are grateful for Clarivate’s partnership over the years, and we especially want to thank them for their deepening commitment to ALA and libraries. As the first sponsor of our new Public Supporter Program, Clarivate is truly demonstrating they are FOR OUR LIBRARIES,” Hohl said.

The Clarivate contribution to the Public Supporter Program runs for two years, until 2027. Learn more about how Clarivate advocates for libraries.

Data released in this year’s independent Nature Index Research Leaders tables shows a shift in global research landscape

China has extended its lead in research output, according to data released in the latest Nature Index Research Leaders (the data refer to full-year 2024 only – see notes to editors). The country’s Share, the Nature Index’s key metric of author contribution to high-quality research, reached 32,122, a 17% increase on 2023, and the country now has eight institutions in the top 10, compared with seven in 2023. Asian countries as a whole gained ground, while Western institutions lost top positions in the rankings.

The Nature Index Research Leaders tables are released annually and based on data from the previous year. They are part of the Nature Index, which tracks contributions to research articles published in 145 high-quality natural science and health science journals from many publishers, selected by an independent group of researchers.

“The data reflect a profound shift in the global research landscape,” said Simon Baker, Chief Editor, Nature Index. “China’s continued investment in science and technology is translating into rapid, sustained growth in high-quality research output, which in areas such as physical sciences and chemistry is now far outstripping previously dominant Western nations, including the US.”


Other key regional analysis from this year’s tables showed:

  • Strong growth in research output across Asia. South Korea and India were the only other countries in the top 10 to increase their adjusted Share from 2023, by 4.1% and 2% respectively. South Korea rose to 7th place in the overall rankings, overtaking Canada. Singapore, ranked 16th, up from 18th, posted a 7% increase — the second-largest among the top 20 countries after China. Japan was the exception, with a 9% decrease.
  • Previously dominant Western nations recorded a decline in their adjusted Share for the second year in a row, with Canada, France, Switzerland, the UK and the US all recording declines of at least 7%. Australia and Germany declined by less than 3%.

On an institutional level: 

  • Chinese institutions now occupy eight of the top 10 positions in the institutional rankings. The Chinese Academy of Sciences (CAS) retained its top spot. The University of Science and Technology of China took 3rd place, while Zhejiang University (Share 819.57) climbed from 10th to 4th.
  • Several Western institutions fell in the rankings. Germany’s Max Planck Society dropped from 4th to 9th place, while the French National Centre for Scientific Research (CNRS) fell out of the top 10 for the first time, to 13th place. Harvard University held 2nd place despite an 18% drop in adjusted Share, but Stanford University and MIT both slipped – Stanford from 15th to 16th, and MIT from 14th to 17th. The US National Institutes of Health also fell, dropping out of the top 20 to 24th place.

All data and analysis for this year’s Research Leaders can be found here. The tables are based on full-year 2024 data.

Note: Nature Index recognises that many other factors must be taken into account when considering research quality and institutional performance; Nature Index metrics alone should not be used to assess institutions or individuals. Nature Index data and methods are transparent and available under a Creative Commons licence at natureindex.com.

Digital Science makes industry-university collaboration easier with new Dimensions Industry Partnerships

Digital Science has announced its new Dimensions Industry Partnerships dashboard, offering world-leading support for research institutions looking to boost their industry collaboration and commercialization of research.

This new dashboard is aimed at innovation, tech transfer, and economic development teams at universities, who need the kinds of insights only Dimensions can provide to inform their strategy and outreach.

Built on Dimensions – the world’s largest interconnected global research database – the dashboard offers rich, shareable visualizations and unique metrics, such as “corporate proximity” to reflect past industry engagement.

With Dimensions Industry Partnerships, institutions can:

  • Quickly identify faculty members with proven industry engagement
  • Discover IP citations, funding patterns, and researcher networks
  • Surface high-potential collaborators and commercialization opportunities
  • Benchmark institutional strengths against peers
  • Save hours of manual data gathering with intuitive, shareable insights

Digital Science’s Executive Vice President of Academic Markets, Jonathan Breeze, says: “Universities are looking to accelerate the impact of their research and drive more effective industry collaborations, but they need to do so in smarter, faster, easier ways. We now have the right solution for them with Dimensions Industry Partnerships.

“Such collaborations offer many benefits for both universities and industry alike, with research and innovation being critical levers for economic growth and stability. Digital Science is excited to be playing a unique role in this, enabling institutions to conduct a strategic, data-driven approach to their tech transfer and commercialization activities.”

Discover more about Dimensions Industry Partnerships and request a demo today: https://www.dimensions.ai/products/all-products/dimensions-industry-partnerships/

Gender pay gaps persist among leading science publishers

Findings indicate how women are being undervalued by their employers, argue journal editors

Despite promises to close the gender gap, leading science publishers have maintained large and persistent gender pay gaps favouring men since 2017, finds an analysis of eight years of data, published by PLOS Global Public Health today.

Conducted by Jocalyn Clark, International Editor at The BMJ, and Elizabeth Zuccala, Senior Deputy Medical Editor at the Medical Journal of Australia, it shows that every publisher analysed pays men more than women, and that Elsevier remains an outlier in both the magnitude of its gender pay gap and its lack of progress.

The findings are based on data from 2017 to 2024 for the five largest science publishers in the UK, whose journals include Nature, Science, Cell, and The Lancet. For additional comparison, they collected data for the publisher and owner of the leading UK-based medical journal The BMJ, and for the largest UK-based science funder, Wellcome.

Eight years ago Elsevier stood out among publishers, with a median pay gap in 2017 of 40.4% in favour of men over women in its UK business, they explain. The UK average that year was 18.4%.

Yet despite the company’s leadership promising change, Elsevier’s median pay gap for 2024 is 32.8%, maintaining its position as worst performer among peers over all eight years of mandatory reporting, an improvement of just 7.6 percentage points over that time. In fact, the ratio of Elsevier’s pay gap to the UK average has worsened – from 2.2 times in 2017, and 2.4 in 2020 and 2021, to 2.9 times the UK average in 2024.
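The ratio arithmetic can be checked directly. A minimal sketch using the figures cited in the article (the `gap_ratio` helper is ours for illustration; the 2024 UK average is not stated, so the last line only backs out what a 2.9× ratio would imply):

```python
def gap_ratio(publisher_gap: float, uk_average: float) -> float:
    """Ratio of a publisher's median pay gap to the UK average."""
    return publisher_gap / uk_average

# 2017: Elsevier's 40.4% gap against the 18.4% UK average
print(round(gap_ratio(40.4, 18.4), 1))  # 2.2, as reported

# 2024: a 32.8% gap at 2.9 times the UK average implies
# a national average of roughly 11.3%
print(round(32.8 / 2.9, 1))  # 11.3
```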

In contrast, peer companies such as Springer Nature and Wiley have decreased their pay gaps by 26% (to 9.5%) and by 18% (to 17.7%) respectively, and BMJ has never reported a gender pay gap above 12%. Wellcome reports a 2024 pay gap of 15.7%, down from a high of 20.8% in 2017.

Clark and Zuccala argue that gender pay gaps at science and health publishers – especially persistent gaps – defy all manner of commitments to equity, diversity, and inclusion (EDI) and gender equality, and they call for stronger measures to hold all leading scientific organisations and publishers to account for addressing this problem.

They also point out that the problem here is not a lack of qualified or motivated women to occupy and excel in senior editorial and publishing roles. It is the responsibility of publishers to provide conducive environments for women’s career advancement, free of bias that limits their access to upward mobility and higher pay. 

“Principles of EDI and gender equality are fundamental to doing good science – they should be bedrocks of publishing that science too,” they conclude.