Policy on the Use of Generative AI and AI-Assisted Technologies in the Journal Editorial Process

June 18, 2024

In response to the rapid advancement and integration of generative AI and AI-assisted technologies, GAIA has developed this policy to enhance transparency and provide clear guidance for authors, reviewers, and the editorial team. Recognizing the dynamic nature of technological developments, the editorial team commits to monitoring such developments vigilantly and updating this policy in a timely manner to ensure its continued relevance and effectiveness.

Editorial Board and Office

Peer review is foundational to the integrity of the scientific process, and GAIA upholds it rigorously. Because assessing scientific manuscripts requires nuanced judgment, these editorial responsibilities are solely human functions. GAIA therefore prohibits editors from using generative AI and AI-assisted technologies in manuscript evaluation or decision-making: such technologies lack the critical thinking and original assessment capabilities essential for these tasks and risk producing flawed or biased conclusions. The editor bears full responsibility for managing the editorial process, making final decisions, and communicating them to the authors. Editors are also responsible for ensuring that both authors and reviewers fully disclose their use of AI tools and comply with ethical standards. If AI tools are used to select reviewers, the editorial team must closely supervise this process and retain ultimate responsibility for all editorial decisions. As AI technologies evolve, GAIA will regularly update its policies and guidelines to promote their responsible and transparent use in scientific publishing.
To improve article visibility in academic search engines, the editorial team aligns article titles with Academic Search Engine Optimization (ASEO) principles. For this purpose, the GAIA editorial office may use "ChatGPT Team", a paid version of ChatGPT that does not use entered data for model training, thereby preserving confidentiality and data ownership.

Authors

Authors are required to disclose any use of AI tools, including generative AI and large language models (LLMs), in the preparation of their manuscripts. This disclosure should state the tool's name, version, and manufacturer, and appear in the manuscript's methods section or acknowledgements. The use of translation and grammar-correction tools, such as DeepL Translator and Writefull, does not require declaration; however, manual verification of their output remains compulsory. Generative AI tools, including LLMs, must not be credited as authors. Accountability for the manuscript's content rests solely with its human authors, who must ensure that any AI-generated content adheres to established publication ethics and guidelines and maintains the integrity of the submitted work.

Reviewers

Reviewers must not use AI tools to write reviews or parts of reviews, in order to prevent biased or inaccurate evaluations. If AI tools, including translation and grammar-correction tools, are used at any stage of the review process, confidentiality must be maintained. Reviewers are also required to disclose their use of such tools to the journal, specifying which tools were used and how they were applied.
New issue: GAIA 33/2 (2024)

September 7, 2024

This latest issue of GAIA features a case study on innovative design for sustainable mobility, as well as reflections on the German Ethics Council's opinion on climate justice and on the narrating of biodiversity.