
IEEE Journals Lead the Field in the Latest Citation Rankings

IEEE, the world’s largest technical professional organization advancing technology for humanity, announced today that its journals once again excelled in the latest journal citation rankings, according to the Journal Citation Reports™ (JCR) from Clarivate and the CiteScore™ metrics from Scopus, both released in June 2025. The results show a consistently high level of performance across IEEE publications in a wide range of technologies, in both open access and hybrid journals. In addition, several IEEE journals were ranked at the top of their respective fields.

Journal Rankings by Impact Factor

Journal Impact Factor™ (JIF) is widely used in the technical community to compare the influence of scholarly research journals. A journal’s Impact Factor is the average number of times articles it published in the previous two years were cited during the JCR year (a minimal sketch of the calculation follows the list below). In the most recent JCR, a wide range of IEEE publications were among the most-cited journals in multiple categories:

  • 24 of the top 30 journals in Electrical & Electronic Engineering
  • 23 of the top 30 journals in Telecommunications
  • 9 of the top 20 journals in Computer Science, Artificial Intelligence
  • 5 of the top 10 journals in Computer Science, Information Systems
  • 3 of the top 5 journals in Imaging Science
  • 3 of the top 5 journals in Automation and Control Systems
  • 3 of the top 5 journals in Computer Science, Cybernetics
  • 3 of the top 5 journals in Computer Science, Hardware & Architecture
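
As an illustration of the JIF definition above, the arithmetic reduces to a single division over a two-year window. The sketch below is minimal and uses hypothetical counts, not figures from the 2025 JCR.

```python
# Minimal sketch of the two-year Journal Impact Factor (JIF) calculation.
# JIF for year Y = citations received in Y to items published in Y-1 and Y-2,
# divided by the number of citable items published in Y-1 and Y-2.

def journal_impact_factor(citations_in_jcr_year: int, citable_items: int) -> float:
    """citations_in_jcr_year: citations received in the JCR year to articles
    published in the two preceding years; citable_items: number of citable
    articles published in those two years."""
    return citations_in_jcr_year / citable_items

# Hypothetical example: 9,340 citations in 2024 to 200 citable items
# published in 2022-2023 would give a JIF of 46.7.
print(journal_impact_factor(9_340, 200))  # 46.7
```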

Top Ranked Journals in Several JCR Categories

The following are examples of IEEE journals that ranked highly in their respective categories:

  • IEEE Communications Surveys and Tutorials – #1 in Telecommunications; #1 in Computer Science, Information Systems (JIF: 46.7)
  • IEEE Geoscience and Remote Sensing Magazine – #1 in Imaging Science; #1 in Remote Sensing (JIF: 16.4)
  • IEEE/CAA Journal of Automatica Sinica – #1 in Automation and Control Systems (JIF: 19.2)
  • IEEE Transactions on Cybernetics – #1 in Computer Science, Cybernetics (JIF: 10.5)
  • Proceedings of the IEEE – #2 in Electrical Engineering (JIF: 25.9)
  • IEEE Transactions on Pattern Analysis and Machine Intelligence – #3 in Computer Science, Artificial Intelligence; #3 in Electrical Engineering (JIF: 18.6)
  • IEEE Journal on Selected Areas in Communications – #2 in Telecommunications (JIF: 17.2)
  • IEEE Wireless Communications – #2 in Computer Science, Hardware and Architecture; #3 in Telecommunications (JIF: 11.5)

Journal Rankings by CiteScore

CiteScore is a widely accepted citation metric calculated from Elsevier’s Scopus data. CiteScore 2024 divides the number of citations received in 2021–2024 by peer-reviewed documents a journal published in 2021–2024 by the number of those documents (a minimal sketch follows the list below). In the most recent CiteScore release from Scopus, IEEE publications were consistently among the most-cited journals in multiple categories:

  • 8 of the top 20 publications in Electrical Engineering
  • 9 of the top 20 publications in Computer Software
  • 8 of the top 20 publications in Computer Hardware
  • 7 of the top 20 publications in Signal Processing
  • 4 of the top 5 publications in Computational Theory and Mathematics
  • 3 of the top 5 publications in Automotive Engineering
  • 2 of the top 5 publications in Applied Mathematics
  • 2 of the top 5 publications in Control & Systems Engineering
  • 2 of the top 5 publications in Computer Networks and Communications
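
As referenced above, the CiteScore arithmetic differs from JIF mainly in its four-year window. A minimal sketch, again with hypothetical counts:

```python
# Minimal sketch of the CiteScore 2024 calculation: citations received in
# 2021-2024 to peer-reviewed documents published in 2021-2024, divided by
# the number of those documents.

def citescore(citations_2021_2024: int, documents_2021_2024: int) -> float:
    return citations_2021_2024 / documents_2021_2024

# Hypothetical example: 25,860 citations to 300 documents would give 86.2.
print(citescore(25_860, 300))  # 86.2
```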

Top Ranked Journals in Several CiteScore Categories

The following are examples of IEEE journals that ranked highly in their respective categories:

  • IEEE Communications Surveys and Tutorials – #1/970 in Electrical and Electronic Engineering (CiteScore: 86.2)
  • Proceedings of the IEEE – #1/239 in General Computer Science; #2/970 in Electrical and Electronic Engineering (CiteScore: 71.1)
  • IEEE Transactions on Pattern Analysis and Machine Intelligence – #1/665 in Applied Mathematics; #1/197 in Computational Theory and Mathematics (CiteScore: 35.0)
  • IEEE Signal Processing Magazine – #4/665 in Applied Mathematics; #4/183 in Signal Processing (CiteScore: 20.5)
  • IEEE Transactions on Evolutionary Computation – #2/197 in Computational Theory and Mathematics; #3/136 in Theoretical Computer Science (CiteScore: 23.5)
  • IEEE Transactions on Intelligent Transportation Systems – #2/133 in Automotive Engineering (CiteScore: 17.8)
  • IEEE Geoscience and Remote Sensing Magazine – #1/198 in Earth and Planetary Sciences; #2/174 in Instrumentation (CiteScore: 27.1)
  • IEEE Journal on Selected Areas in Communications – #4/507 in Computer Networks and Communications (CiteScore: 33.6)

“The journal citation rankings are one of several important methods scientists and research professionals use to assess the quality of journals when they are considering where to publish their research,” says W. Clem Karl, IEEE Vice President, Publication Services and Products. “The latest results from Clarivate and Scopus underscore IEEE’s dedication to the integrity of the scientific record and to publishing high-quality, breakthrough research that our users can trust. Another key factor in determining where to publish is readership, and IEEE disseminates these discoveries to a broad audience of over 10 million monthly users of the IEEE Xplore digital library. I would like to thank our many authors, reviewers, and editors for their important contributions, which help IEEE continue to publish the highest-quality and most vital information in the field.”

First Impact Factors for New Fully Open Access Journals

IEEE also announced that five more of its recently launched fully open access journals were accepted for indexing in the Web of Science Core Collection™ and awarded their first Journal Impact Factors in 2025:

  • IEEE Open Journal of Control Systems
  • IEEE Open Journal of Instrumentation and Measurement
  • IEEE Open Journal of the Solid-State Circuits Society
  • IEEE Open Journal of Ultrasonics, Ferroelectrics, and Frequency Control
  • IEEE Transactions on Quantum Engineering

This is in addition to 14 other IEEE fully open access journals that received their first Journal Impact Factors in 2023 and 2024.

Additional Journal Bibliometrics 

IEEE also monitors other common bibliometric journal measurements such as Article Influence® Score and Eigenfactor®. These are calculated differently from Impact Factor and CiteScore: Eigenfactor weights each citation by the influence of the citing journal, in the manner of PageRank, and Article Influence Score expresses that influence on a per-article basis. IEEE journals rank highly in these additional citation measurements as well (a simplified sketch of an Eigenfactor-style calculation follows the list below):

  • IEEE Access, IEEE’s largest fully open access publication, is ranked as the No. 1 journal by Eigenfactor in Electrical Engineering and Telecommunications. IEEE has 8 of the top 10 journals in Electrical Engineering by Eigenfactor Score.
  • IEEE Transactions on Pattern Analysis and Machine Intelligence was ranked as the No. 1 journal by Eigenfactor in Artificial Intelligence, followed by IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Cybernetics, and IEEE Transactions on Image Processing, which together take four of the top five spots in the AI category.
  • IEEE publishes 8 of the top 10 journals in Electrical Engineering and 9 of the top 10 journals in Telecommunications as calculated by Article Influence Score.
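
For readers unfamiliar with Eigenfactor, the sketch below shows the general idea in simplified form: journals are scored by a PageRank-style eigenvector of the journal-to-journal citation matrix, so citations from influential journals count for more. This is not the exact Eigenfactor algorithm, which adds details omitted here (a five-year citation window, exclusion of self-citations, and article-share normalization for Article Influence Score).

```python
# Simplified, illustrative sketch of an Eigenfactor-style influence score:
# a PageRank-like power iteration over the journal citation network.
import numpy as np

def influence_scores(C: np.ndarray, damping: float = 0.85,
                     iters: int = 100) -> np.ndarray:
    """C[i, j] = citations from journal i to journal j."""
    n = C.shape[0]
    row_sums = C.sum(axis=1, keepdims=True)
    # Row-stochastic transition matrix; journals that cite nothing
    # (dangling nodes) are given a uniform row.
    P = np.divide(C, row_sums, out=np.full((n, n), 1.0 / n), where=row_sums > 0)
    v = np.full(n, 1.0 / n)
    for _ in range(iters):
        v = damping * (v @ P) + (1 - damping) / n  # power iteration
    return v / v.sum()

# Hypothetical 3-journal citation network:
C = np.array([[0.0, 3.0, 1.0], [2.0, 0.0, 0.0], [5.0, 1.0, 0.0]])
print(influence_scores(C))
```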

For more detailed information on IEEE rankings and how each bibliometric measurement is calculated, please see the full results.

Digital Science relaunches Scientometric Researcher Access to Data (SRAD) program

Digital Science today reaffirms its commitment to supporting the global scientometric research community and the study of scholarly literature by relaunching its Scientometric Researcher Access to Data (SRAD) program.

This revitalized initiative will offer scientometric researchers streamlined, no-cost access to Digital Science’s Altmetric and Dimensions data, and is now further expanded by offering access to Dimensions on BigQuery.

The SRAD program is available to scientometrics researchers involved in non-commercial scientometric studies, empowering them to more easily answer system-wide research questions about scholarly literature and its impact.

To lead this important effort and build a thriving global community of expert users, Digital Science has appointed Kathryn Weber-Boer to the position of Director Scientometrics – Scientometric Researcher Engagement. Ms Weber-Boer brings deep expertise in scientometrics, academic engagement, and advanced analytics. 

Ms Weber-Boer said: “This program plays an important role in Digital Science’s commitment to open research and improving research. I am honoured to be in the position of driving strategic outreach, program design, and community leadership, to help researchers maximize the impact of Digital Science tools.

“By expanding access to Dimensions on GBQ, we’re excited to enable scientometrics researchers to answer complex questions with big data, exploring and linking more datapoints, and connecting our world-leading Dimensions data to other open datasets.

“The SRAD program is built around key principles of accessibility, responsible data use, and community empowerment. Through tailored training and dynamic community engagement, it’s our hope that we can contribute to driving innovation in the fields of Scientometrics, Research Policy, and Innovation Studies,” she said.
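
By way of illustration, a system-wide question of the kind SRAD supports might be expressed against Dimensions on BigQuery as below. This is a sketch only: the table path and field names are hypothetical stand-ins, since the actual Dimensions schema is documented separately by Digital Science.

```python
# Sketch of querying Dimensions on Google BigQuery (GBQ) for a simple
# system-wide question. The table path and column names are hypothetical;
# consult Digital Science's schema documentation for the real ones.
from google.cloud import bigquery

client = bigquery.Client()  # assumes GBQ credentials are configured

sql = """
    SELECT year, COUNT(*) AS n_publications
    FROM `dimensions-data.example.publications`  -- hypothetical table
    WHERE open_access = TRUE                     -- hypothetical field
    GROUP BY year
    ORDER BY year
"""

for row in client.query(sql).result():
    print(row.year, row.n_publications)
```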

Signals Launches Sleuth AI

Introducing Sleuth AI: Signals’ AI-powered research integrity assistant

We’re excited to launch the next generation of research integrity evaluation. Sleuth AI transforms how editorial and research integrity teams investigate manuscripts — turning hours of manual analysis into minutes of intelligent, interactive exploration. Ask questions, uncover evidence, detect potential issues — all with the transparency and explainability that research integrity decisions demand.

Designed to fit seamlessly into editorial workflows, Sleuth AI is integrated directly into Signals Manuscript Checks — providing a secure, private, interactive assistant.

[Image: Sleuth AI sits alongside Signals evaluation as part of Signals Manuscript Checks]

Signals’ approach to AI and research integrity

AI is rapidly disrupting every part of the research ecosystem — from discovery to publication. This brings new integrity risks for publishers, including convincing fabricated content. 

Since Signals launched, we’ve used AI to scale our operations — for example, extracting references from manuscripts. However, we chose not to take an AI-first approach to manuscript analysis. Why? Signals’ approach has always been about transparency and explainability. The stakes are high in research integrity evaluations: for authors, a false positive can result in frustrating publication delays or rejections that affect careers; for publishers, making the wrong decision can lead to a future retraction and reputational damage. For these reasons, typical black-box AI approaches that lack transparency are not good enough. Sleuth AI is designed with transparency in mind.

[Image: Sleuth AI identifies an irrelevant section of a manuscript]

What Sleuth AI does today

When it comes to research integrity evaluations, publishing teams need support to:

  • Automate time-consuming manual checks so that individuals can focus on other priorities.
  • Introduce new checks that accurately identify issues in manuscripts.
  • Identify evidence that enables quick decision-making.     

Sleuth AI supports all of these goals, helping editorial and integrity teams catch problematic articles before they waste valuable staff and peer reviewer time. It also adds something new — personalisation. Integrity teams can now experiment with new investigative checks, enabling them to adapt as the integrity landscape continues to evolve.

Built to provide reliable evidence directly from the manuscript, Sleuth AI is instructed to stick to what it knows. When it can’t provide an answer or lacks information, it will say so — helping minimise hallucinations.

Here are a few examples of what Sleuth AI can do today — these are provided as pre-defined prompts in the first release: 

  • Increase the efficiency of editorial checks by extracting and highlighting COI, funding, and ethics statements, and flagging anything unusual.
  • Provide evidence of citation issues by showing users exactly where in a manuscript potential citation misconduct occurs.
  • Add new exploratory analysis that detects irrelevant or inconsistent text that could indicate low-quality or AI-generated content. 

[Image: Sleuth AI extracts disclosure statements from manuscripts and flags a potential issue]

What’s Next for Sleuth AI

We’re most excited about the future of Sleuth AI. 

Sleuth AI enables us to rapidly experiment with new ways to investigate articles. As effective methods emerge, we’ll convert them into standard signals, constantly improving the evaluations we provide to the scholarly community.

In the near term, Sleuth AI will be enhanced with the Signals Data Graph, which combines networked analysis of the world’s publications with expert knowledge. This will give users deeper insight into individual signals; for example, Sleuth AI will be able to explain how a specific retracted reference affects an article’s conclusion.

Looking further ahead, we’ll create contextual and deep evaluations of any article, and plan to roll out Sleuth AI more broadly. From researchers and clinicians to policymakers and the public, everyone engaging with research needs tools to understand what’s credible, relevant, and worth their attention.

Sleuth AI is available now to all publishers using Signals Premium Manuscript Checks via ScholarOne integration and Direct Upload.

Building Trust in Academic Publishing: Enago Reports adds crucial Research Integrity markers

Enago Reports, trusted and used by publishers, scholarly societies, academic institutions, and independent researchers, is excited to announce a significant addition of advanced research integrity checks covering research authenticity, citation reliability, and content quality. These new checks join an existing suite of manuscript screening checks for content integrity, language quality, technical compliance, AI-powered proofreading, and plagiarism detection.

The rise of paper mills, poor-quality research, and AI-generated content continues to threaten the credibility of academic research. This new release of Enago Reports will serve as a submission assistant for researchers, helping to ensure submission-guideline compliance, as well as an effective tool for editors and peer reviewers to assess content integrity and identify potential issues.

The new enhancements focus on three main areas:

Content Authenticity: Detect templated or AI-generated text by flagging tortured phrasing, repetitive language, and incoherent equations. 

Citation Reliability: Trace references to retracted studies or predatory journals to ensure each manuscript is built on a reliable scientific foundation (a minimal sketch of such a check follows below).

Content Integrity: Identify red flags in biomedical papers, assess funding claims, and check for other hard-to-verify issues such as problematic cell lines or a lack of original content.
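
Enago has not published implementation details, so the following is only a minimal sketch of the citation-reliability idea above, assuming a locally maintained set of retracted DOIs; all names and data are illustrative.

```python
# Illustrative sketch of a citation-reliability check: flag manuscript
# references whose DOIs appear in a known set of retracted works.
# Enago's actual implementation is not public; names here are hypothetical.

def flag_retracted_references(reference_dois: list[str],
                              retracted_dois: set[str]) -> list[str]:
    """Return the manuscript's reference DOIs that match retracted works."""
    return [doi for doi in reference_dois if doi.lower() in retracted_dois]

# Hypothetical usage, with a set loaded from a retraction database:
retracted = {"10.1234/retracted.example.2021"}
refs = ["10.1234/retracted.example.2021", "10.5678/sound.study.2023"]
print(flag_retracted_references(refs, retracted))  # flags the first DOI
```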

The technical check suite available under Enago Reports now comprises 200+ checks, including 40+ research integrity checks and 10+ image checks, further enhanced by agentic AI, machine learning models, and advanced semantic technologies.

Springer Nature expands its portfolio of research integrity tools to detect non-standard phrases  

Springer Nature has launched a new tool for use across submissions to its journals and books to detect non-standard phrases in submitted manuscripts, marking the latest step in its ongoing mission to uphold research integrity and safeguard the scholarly record.  

The tool works by detecting unusual phrases that have been awkwardly constructed or are excessively convoluted, for example ‘counterfeit consciousness’ instead of ‘artificial intelligence’. Such phrases are indicators that authors have used paraphrasing tools to evade plagiarism detection. If a number of non-standard phrases are identified by the tool, the submission will be withdrawn. 

The tool has been developed using the public tortured-phrases catalogue of the Problematic Paper Screener (PPS), created by Guillaume Cabanac, Cyril Labbé and Alexander Magazinov, and has undergone multiple rounds of testing and validation to provide a reliable assessment of submissions across academic disciplines.
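
Springer Nature has not released the tool's code, but the mechanism the announcement describes, matching manuscript text against a catalogue of known tortured phrases and acting once a threshold is crossed, can be sketched roughly as follows. The phrases shown are documented examples of tortured phrases; the withdrawal threshold is hypothetical, and the real tool's matching is presumably more sophisticated than substring search.

```python
# Rough sketch of a tortured-phrase screen built on a public catalogue
# such as the PPS list. The threshold below is hypothetical.

TORTURED_PHRASES = {
    "counterfeit consciousness",  # 'artificial intelligence'
    "profound learning",          # 'deep learning'
    "flag to clamor",             # 'signal to noise'
}
WITHDRAWAL_THRESHOLD = 3  # hypothetical number of hits triggering withdrawal

def screen_manuscript(text: str) -> tuple[list[str], bool]:
    """Return the tortured phrases found and whether to withdraw."""
    lower = text.lower()
    hits = [phrase for phrase in TORTURED_PHRASES if phrase in lower]
    return hits, len(hits) >= WITHDRAWAL_THRESHOLD

hits, withdraw = screen_manuscript("We propose a counterfeit consciousness model ...")
print(hits, withdraw)  # ['counterfeit consciousness'], False
```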

Tamara Welschot, Head of Research Integrity, Prevention at Springer Nature, commented:

“Fake research is a challenge that affects all of us in the publishing industry and we all need to work together to combat it. Developing this tool has been a long-running project involving close collaboration between the research integrity group and several technology teams at Springer Nature, building upon important work by integrity sleuths from the academic community. We thank Cabanac, Labbé and Magazinov for their efforts in developing the Problematic Paper Screener and highlighting papers containing tortured phrases to the wider publishing community. Our tool identifies these problematic papers at submission, preventing them from being published and saving editors’ and reviewers’ valuable time.”

The non-standard phrases detector is the newest addition to Springer Nature’s suite of research integrity solutions and complements existing tools: a nonsense text detector, Snappshot (which identifies duplicate or manipulated images), and an irrelevant reference checker. These tools have been developed in-house as part of Springer Nature’s ongoing commitment to ensuring the integrity of the work it publishes. This commitment includes investment in a rapidly growing expert team and ongoing technology development. Springer Nature is also committed to collaborating with the wider publishing community as a contributing organisation in the STM Integrity Hub, which facilitates knowledge and data exchange and develops shared technology tools, and to which Springer Nature has donated its nonsense text detector for use across the sector.

Digital Science to strengthen research integrity in publishing with new Dimensions Author Check API

Scholarly publishers can now fully integrate research integrity checks into their editorial and submission workflows, thanks to Digital Science’s new Dimensions Author Check API, which launches today.

Built on Dimensions – the world’s largest interconnected global research database – Dimensions Author Check evaluates researchers’ publication and collaboration histories within seconds, delivering reliable, concise, structured insights.

For the first time, the new Dimensions Author Check API enables publishers to embed this functionality directly into their own workflows, without the need to switch to an outside platform.

Dr Leslie McIntosh, Vice President of Research Integrity at Digital Science, said the Dimensions Author Check API is designed to support consistent and confident editorial decision-making.

“By highlighting key indicators of research integrity – such as retractions, tortured phrases, or unusual co-authorship patterns – the Dimensions Author Check API helps to rapidly identify potential issues for concern. These include continuously improving indicators that will identify paper mills and increase trust in science,” Dr McIntosh said.

“Importantly, the Author Check API can do this at scale, giving publishers the ability to screen multiple researchers per request. This makes it ideal for high-volume manuscript processing and broader editorial oversight.”

Key benefits of the new Dimensions Author Check API include:

  • Seamless integration: A standards-based RESTful API designed for easy deployment within publishers’ internal systems or third-party platforms (a hypothetical call is sketched after this list).
  • Actionable insights: Clear summaries highlighting key aspects of researchers’ publication and collaboration histories.
  • Operational efficiency: Reducing editorial workload while enhancing the quality and consistency of integrity assessments.
  • Support for transparency and trust: Surfacing critical integrity information at key decision points, strengthening publishers’ ability to adhere to ethical standards.
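
The announcement does not specify the request format, so the following is only a shape sketch of embedding such a check in an editorial workflow; the endpoint URL, authentication scheme, payload, and response fields are all hypothetical.

```python
# Hypothetical sketch of calling a RESTful author-screening endpoint from
# an editorial system. URL, auth, payload, and response shape are
# illustrative; consult Digital Science's API documentation for the real API.
import requests

API_URL = "https://api.example.dimensions.ai/author-check"  # hypothetical
API_KEY = "YOUR_API_KEY"

def check_authors(researcher_ids: list[str]) -> dict:
    """Screen several researchers in one request, as the announcement
    says the API supports, and return the structured summary."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"researchers": researcher_ids},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. per-author retraction and co-authorship flags
```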

For more information or to explore integration options, please contact the Digital Science Publisher Team.

NIH to crack down on excessive publisher fees for publicly funded research

As part of its ongoing commitment to scientific transparency and responsible stewardship of taxpayer dollars, the National Institutes of Health (NIH) today announced plans to implement a new policy that will cap how much publishers can charge NIH-supported scientists to make their research findings publicly accessible. This initiative reflects a broader effort to restore public trust in public health by promoting open, honest, and transparent scientific communication.

“Creating an open, honest, and transparent research atmosphere is a key part of restoring public trust in public health,” said NIH Director Dr. Jay Bhattacharya. “This reform will make science accessible not only to the public but also to the broader scientific community, while ending perverse incentives that don’t benefit taxpayers.”

The current landscape of scholarly publishing presents growing challenges. Some major publishers charge as much as $13,000 per article for immediate open access, while also collecting substantial subscription fees from government agencies. For example, one publishing group reportedly receives more than $2 million annually in subscription fees from NIH, in addition to tens of millions more through exclusive article processing charges (APCs). These costs ultimately burden taxpayers who have already funded the underlying research.

To address this imbalance, NIH will introduce a cap on allowable publication costs starting in Fiscal Year (FY) 2026, ensuring that publication fees remain reasonable across the research ecosystem. The policy aims to curb excessive APCs and ensure the broad dissemination of research findings without unnecessary financial barriers.

This reform builds on NIH’s long-standing commitment to open science and public access, as demonstrated by initiatives such as:

  • The NIH Public Access Policy, which ensures that peer-reviewed publications resulting from NIH funding are made freely available to the public without embargo.
  • The NIH Data Management and Sharing Policy, which promotes the timely sharing of scientific data regardless of publication status.
  • The NIH Research Portfolio Online Reporting Tools (RePORT), which provide public insight into NIH-funded research activities, expenditures, and results.
  • The NIH Intramural Access Policy, which encourages broader use of NIH-developed technologies through licensing strategies that enhance patient and public access.

“This policy marks a critical step in protecting the integrity of the scientific publishing system while ensuring that public investments in research deliver maximum public benefit,” Dr. Bhattacharya said.

About the National Institutes of Health (NIH): NIH, the nation’s medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit www.nih.gov.

A United Call to Protect the Future of Research

STM has joined the Association of College and Research Libraries, the Association of Research Libraries, the Association of University Presses, and the Society for Scholarly Publishing in a unified call to safeguard the future of research.

Published in The Scholarly Kitchen, our joint op-ed responds to escalating uncertainty in the United States, including sweeping federal funding cuts and reductions in the federal research workforce, with a shared statement of values and intent.

While this statement centers on the U.S. context, the values at its core—trust, continuity, and scholarly independence—are essential across the global research ecosystem.

Though our sectors bring different perspectives, we are aligned in our belief that the infrastructure supporting discovery, scholarship, and access to knowledge must be protected.

This piece marks a first step—and signals a broader commitment to collaboration and action.

Read the op-ed: Trust and Integrity: A Research Imperative

LIBER Launches a Taskforce on Artificial Intelligence

LIBER is launching a dedicated taskforce to investigate Artificial Intelligence (AI) developments across the research library field, strengthen its strategic approach to AI-related matters, and provide hands-on guidance for librarians to take a leading role in AI literacy.

AI is seen as a disruptive force for research and higher education. Addressing the rising challenges, the newly established LIBER AI Taskforce was created out of the pressing need and ambition to explore and implement an agenda on AI within the current LIBER Strategy 2023-2027.  The Taskforce was formally launched during the LIBER Strategy Update Session at the LIBER Annual Conference 2025 in Lausanne, Switzerland, with an address delivered by LIBER Vice-President Dr Giannis Tsakonas. It was subsequently presented in the LIBER Spotlight Session by Sara Kjellberg, Director at Malmö University Library and Chair of the LIBER AI Taskforce.

An important contribution of libraries lies in supporting informed and reflective use rather than uncritical adoption, while upholding core values such as transparency and reliability. Notably, AI is already being investigated and addressed across LIBER, whether from the perspective of copyright, publishing, research data management, or another domain; each Working Group provides new insights on AI matters from its own field of expertise. For example, the LIBER Copyright and Legal Matters Working Group hosted a webinar on the EU AI Act earlier this year, dissecting the legal terminology and exploring real-world library examples for AI guidelines. The critical step the AI Taskforce takes is to bridge all these investigations while adding a strategic layer.

The Taskforce’s mandate is:

  • To speak on AI and research library matters at the European and global levels.
  • To collaborate with LIBER members and partner organisations on AI and research library matters.
  • To operate as an umbrella connecting existing AI activities in LIBER Working Groups and to implement additional tactical and operational activities.

The initiative will operate alongside the LIBER Steering Committees, with the vision of embedding AI in the next LIBER Strategy after 2027. To enable a concerted approach, the Taskforce will collaborate with the LIBER Working Groups and LIBER Quarterly, with the aim of developing a work plan for the coming two years. This next phase will be led by Karin Rydving, Assistant Director at Oslo University Library and a member of the LIBER AI Taskforce.

We look forward to sharing further updates and engaging the broader LIBER community in the AI investigation and activities that follow. 

67 Bricks and Bone & Joint shortlisted for ALPSP Innovation award

67 Bricks and Bone & Joint have been shortlisted for this year’s ALPSP Innovation Award in Publishing for their AI-generated podcast series ‘AI Talks with Bone & Joint’.

The tool seamlessly converts static research PDFs into lively and engaging podcasts, accurately summarising a paper’s findings. The podcasts give the Bone & Joint community a new way to keep up to date with the latest articles while multitasking, enabling them to discover which content they may wish to explore in more detail.

The tool was developed by 67 Bricks in collaboration with Bone & Joint in a matter of weeks, and has significantly reduced the time and effort required from Bone & Joint staff to scale up podcast production, supporting the ongoing needs of their time-poor users.

The podcast is presented by two AI-generated hosts, who talk conversationally through the research findings of a single paper. The tool’s UI has been built to give the Bone & Joint team oversight of the input and generated content, keeping a human ‘in the loop’ and safeguarding the content’s accuracy and integrity.
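
67 Bricks has not published its implementation, but a PDF-to-podcast pipeline of the general kind described can be sketched as follows, using pypdf plus OpenAI's chat and text-to-speech APIs purely as stand-ins; the model names, voices, and prompt are illustrative, and a production system would add the human review step described above.

```python
# Generic sketch of a PDF-to-podcast pipeline (not 67 Bricks' actual code):
# extract the paper's text, ask an LLM for a two-host dialogue script,
# then synthesise each turn with alternating text-to-speech voices.
from pypdf import PdfReader
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def paper_to_podcast(pdf_path: str) -> None:
    text = "\n".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    script = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content":
                   "Summarise this paper as a dialogue between hosts A and B, "
                   "one line per turn, each prefixed 'A:' or 'B:'.\n\n" + text}],
    ).choices[0].message.content
    for i, line in enumerate(script.splitlines()):
        if ":" not in line:
            continue
        voice = "alloy" if line.startswith("A:") else "onyx"  # two distinct hosts
        audio = client.audio.speech.create(model="tts-1", voice=voice,
                                           input=line.partition(":")[2].strip())
        audio.write_to_file(f"segment_{i:03d}.mp3")  # stitch segments afterwards
```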

Jennifer Schivas, CEO of 67 Bricks, commented ‘It’s fantastic to be recognised for our work with Bone & Joint. We see so much opportunity for innovative applications of AI and other technology to widen research dissemination, and these AI-generated podcasts are a fantastic example of that.’

Emma Vodden, Director of Publishing & Innovation at Bone & Joint, added ‘We are delighted with the response we have had from our community to this initiative, which builds upon the knowledge translation programme we have for our journals. These summaries provide a short, easy-to-access update for busy surgeons who are looking to consume content on the go. Bone & Joint is committed to finding ethical and useful ways to integrate AI in our workflows whilst maintaining the integrity of the content.’

The winners of the ALPSP awards will be announced at their annual conference in September.

MDPI Signs First North American Agreement with Canadian Consortium

The Multidisciplinary Digital Publishing Institute (MDPI) has signed its first consortium agreement in North America, marking a significant milestone for the fully Open Access publisher. The agreement is with the Federal Science Libraries Network (FSLN) and will last for two years. Five Canadian federal agencies will gain access to MDPI’s Institutional Open Access Program (IOAP), including significant discounts on article processing charges for affiliated scholars across its portfolio of 475 journals.

By partnering with some of Canada’s largest science-based departments, MDPI solidifies its commitment to advancing Open Science across all continents. Participating institutions currently include Agriculture and Agri-Food Canada, Environment and Climate Change Canada, Health Canada, National Research Council Canada, and Natural Resources Canada.

“The Open Science landscape in Canada is rapidly evolving, with the Tri-Agency Open Access Policy set for renewal by the end of 2025. This reflects ongoing efforts to foster greater scientific transparency and accessibility at a national policy level,” says Ryan Siu, Institutional Partnerships Manager at MDPI. “Our new agreement with FSLN represents our shared commitment to further these efforts and foster wider readership. By aligning with these initiatives, we make progress towards research that’s both inclusive and impactful, benefiting local and global communities alike.”

Silverchair Transforms Author Experience with ScholarOne Gateway

Silverchair has launched the pilot of ScholarOne Gateway, a new centralized hub that will revolutionize the way authors interact with publishers during the submission and peer review process. ScholarOne Gateway addresses critical pain points in the peer review process by creating a unified, modern experience that connects authors to a publisher’s entire journal portfolio through a single, intuitive interface.

With longstanding software like ScholarOne, the user experience can become fragmented, leading to frustration and confusion. ScholarOne Gateway directly tackles these challenges, offering users a seamless experience across a publisher’s portfolio: submit, review, and manage work across multiple journals with ease.

“Researchers today expect the same level of user experience from academic platforms that they receive from consumer technology,” said Josh Dahl, SVP of Product and General Manager of ScholarOne at Silverchair. “ScholarOne Gateway represents a fundamental shift toward modern, researcher-centric design that eliminates the friction and confusion that has plagued peer review. This is the first of many such improvements we have in development for ScholarOne.”

Built on extensive user and customer feedback, ScholarOne Gateway serves as a comprehensive one-stop shop for authors and reviewers. For publishers, ScholarOne Gateway addresses critical operational challenges by enabling better management of researcher experiences across their entire ecosystem.

The new interface incorporates best-in-class workflows designed to encourage and streamline submissions while building upon ScholarOne’s proven submission experience. Although it is a brand-new, separate application, ScholarOne Gateway integrates seamlessly and stays in sync with the legacy ScholarOne Manuscripts system, ensuring continuity for existing users.

Select pilots for ScholarOne Gateway have launched with key development partners who are committed to meeting the changing needs of the research community. As the development of ScholarOne Gateway progresses, the focus will continue to be on addressing user frustrations and publisher operational challenges.

To see more of ScholarOne Gateway’s functionality, watch a short video here.

The pilot phase will expand to include additional development partners and incorporate user feedback, with plans for a full launch anticipated by the end of the year.