Taylor & Francis Clarifies the Responsible Use of AI Tools in Academic Content Creation

    The use of artificial intelligence (AI) tools in research and writing is an evolving practice. AI-based tools and technologies include, but are not limited to, large language models (LLMs), generative AI, and chatbots (for example, ChatGPT). Below we restate our guidance on author accountability and responsibilities as it relates to the use of AI tools in content creation. This policy will be updated as appropriate.

    Taylor & Francis recognizes the increased use of AI tools in academic research. As the world’s leading publisher of human-centered science, we consider that such tools, where used appropriately and responsibly, have the potential to augment research outputs and thus foster progress through knowledge.

    Authors are accountable for the originality, validity, and integrity of the content of their submissions. In choosing to use AI tools, authors are expected to do so responsibly and in accordance with our editorial policies on authorship and principles of publishing ethics.

    Authorship requires taking accountability for content, consenting to publication via an author publishing agreement, and giving contractual assurances about the integrity of the work, among other responsibilities. These are uniquely human responsibilities that cannot be undertaken by AI tools.

    Therefore, AI tools must not be listed as authors. Authors must, however, acknowledge all sources and contributors included in their work. Where AI tools are used, such use must be acknowledged and documented appropriately.