Artificial Intelligence (AI) Policy
Introduction
Tahta Media Group (TMG) recognises the value of artificial intelligence (AI) and its potential to help authors in the research and writing process. TMG welcomes developments in this area that enhance opportunities for generating ideas, accelerating research discovery, synthesising or analysing findings, polishing language, and structuring a submission.
Generative AI offers opportunities to accelerate research and its dissemination. While these opportunities can be transformative, the technology cannot replicate human creative and critical thinking. TMG's policy on the use of AI technology has been developed to help authors, reviewers, and editors make sound judgements about the ethical use of such technology.
These policies have been prompted by the rise of generative AI and AI-assisted technologies, which content creators are expected to use increasingly. They aim to provide greater transparency and guidance to authors, reviewers, editors, readers, and contributors. TMG will monitor developments in this area and will adjust or refine these policies when appropriate. This Artificial Intelligence Policy is based on Elsevier's Code of Ethics and Guidelines.
For reviewers
The use of generative AI and AI-assisted technologies in the journal peer review process
When a researcher is invited to review another researcher’s paper, the manuscript must be treated as a confidential document. Reviewers should not upload a submitted manuscript or any part of it into a generative AI tool, as this may violate the authors’ confidentiality and proprietary rights and, where the paper contains personally identifiable information, may breach data privacy rights.
This confidentiality requirement extends to the peer review report, as it may contain confidential information about the manuscript and/or the authors. For this reason, reviewers should not upload their peer review reports into an AI tool, even if it is just for the purpose of improving language and readability.
Peer review is at the heart of the scientific ecosystem, and TMG abides by the highest standards of integrity in this process. Reviewing a scientific manuscript implies responsibilities that can only be attributed to humans. Generative AI or AI-assisted technologies should not be used by reviewers to assist in the scientific review of a paper, as the critical thinking and original assessment needed for peer review are outside the scope of this technology, and there is a risk that the technology will generate incorrect, incomplete, or biased conclusions about the manuscript. The reviewer is responsible and accountable for the content of the review report.
TMG's AI author policy states that authors are allowed to use generative AI and AI-assisted technologies in the writing process before submission, but only to improve the language and readability of their paper and with the appropriate disclosure. Reviewers can find such disclosure at the bottom of the paper, in a separate section before the list of references.
Please note that TMG uses identity-protected AI-assisted technologies, such as those used during the screening process to conduct completeness and plagiarism checks and to identify suitable reviewers. These in-house or licensed technologies respect authors' confidentiality. They are subject to rigorous evaluation for bias and comply with data privacy and data security requirements.
TMG embraces new AI-driven technologies that support reviewers and editors in the editorial process, and continues to develop and adopt in-house or licensed technologies that respect authors', reviewers', and editors' confidentiality and data privacy rights.
For editors
The use of generative AI and AI-assisted technologies in the journal editorial process
A submitted manuscript must be treated as a confidential document. Editors should not upload a submitted manuscript or any part of it into a generative AI tool, as this may violate the authors’ confidentiality and proprietary rights and, where the paper contains personally identifiable information, may breach data privacy rights.
This confidentiality requirement extends to all communication about the manuscript, including any notification or decision letters, as they may contain confidential information about the manuscript and/or the authors. For this reason, editors should not upload their letters into an AI tool, even if it is just for the purpose of improving language and readability.
Peer review is at the heart of the scientific ecosystem, and TMG abides by the highest standards of integrity in this process. Managing the editorial evaluation of a scientific manuscript implies responsibilities that can only be attributed to humans. Generative AI or AI-assisted technologies should not be used by editors to assist in the evaluation of, or decision-making on, a manuscript, as the critical thinking and original assessment needed for this work are outside the scope of this technology, and there is a risk that the technology will generate incorrect, incomplete, or biased conclusions about the manuscript. The editor is responsible and accountable for the editorial process, the final decision, and the communication thereof to the authors.
TMG's AI author policy and the JPN Publication Ethics Policy state that authors are allowed to use generative AI and AI-assisted technologies in the writing process before submission, but only to improve the language and readability of their paper and with the appropriate disclosure. Editors can find such disclosure at the bottom of the paper, in a separate section before the list of references. If an editor suspects that an author or a reviewer has violated our AI policies, they should inform the publisher.
Please note that TMG uses identity-protected AI-assisted technologies, such as those used during the screening process to conduct completeness and plagiarism checks and to identify suitable reviewers. These in-house or licensed technologies respect authors' confidentiality. They are subject to rigorous evaluation for bias and comply with data privacy and data security requirements.
TMG embraces new AI-driven technologies that support reviewers and editors in the editorial process, and continues to develop and adopt in-house or licensed technologies that respect authors', reviewers', and editors' confidentiality and data privacy rights.