The African Journal Partnership Program’s Guidance on Use of AI in Scholarly Publishing
Caradee Y. Wright1, Margaret Lartey2, Kenza Khomsi3, Frederico Peres4, Daniel Yilma5, James Kigera6, Annette Flanagin7, Ahia Gbakima8, David Ofori-Adjei9, Sumaili Kiswaya Ernest10, Siaka Sidibé11, Adégné Togo12, Adamson S. Muula13, Stefan Jansen14, Kato Njunwa15
1Chief Specialist Scientist, South African Medical Research Council and Deputy Editor, Journal of Health and Pollution
2Department of Medicine & Therapeutics, University of Ghana Medical School and Deputy Editor-in-Chief, Ghana Medical Journal
3Senior Researcher, General Directorate of Meteorology, Morocco and Deputy Editor, Journal of Health and Pollution
4Senior Researcher, Sergio Arouca National School of Public Health (ENSP/FIOCRUZ, Brazil) and Deputy Editor, Journal of Health and Pollution
5Department of Internal Medicine, Jimma University and Editor-in-Chief, Ethiopian Journal of Health Sciences
6Faculty of Health Sciences, University of Nairobi and Editor-in-Chief, Annals of African Surgery
7Executive Managing Editor and Vice President, Editorial Operations, JAMA and the JAMA Network; Co-Director, African Journal Partnership Program
8Editor-in-Chief, Sierra Leone Journal of Biomedical Research
9Editor-in-Chief, Ghana Medical Journal; Co-Director, African Journal Partnership Program
10Deputy Editor-in-Chief, Annales Africaines de Médecine, University of Kinshasa, Kinshasa, Democratic Republic of the Congo
11Editor-in-Chief, Mali Medical
12Faculty of Medicine and Dental Surgery, Bamako, USTTB; Managing Editor, Mali Medical
13Editor-in-Chief, Malawi Medical Journal, Blantyre, Malawi & Department of Community and Environmental Health, Kamuzu University of Health Sciences, Malawi
14Editor-in-Chief, Rwanda Journal of Medicine and Health Sciences, College of Medicine and Health Sciences, University of Rwanda, Rwanda
15Deputy Editor-in-Chief, Rwanda Journal of Medicine and Health Sciences, College of Medicine and Health Sciences, University of Rwanda, Rwanda
Correspondence to: Caradee Y. Wright; email: caradee.wright@mrc.ac.za
Received: 26 May 2024; Revised: 27 Jul 2024; Accepted: 27 Jul 2024; Available online: 29 Jul 2024
Key words: AI, Scholarly Publishing, AJPP
Ann Afr Surg. 2024; 21(3): 71-75
DOI: http://dx.doi.org/10.4314/aas.v21i3.1
Conflicts of Interest: None
Funding: None
© 2024 Author. This work is licensed under the Creative Commons Attribution 4.0 International License.
Introduction
The rapid introduction and evolution of artificial intelligence (AI), machine learning (ML), natural language processing (NLP), and large language models (LLMs) combined with the emergence of text-generating chatbots have ushered in a transformative era in scholarly publishing. See the Box for common terms and definitions. These technological advancements have the potential to streamline the research and publishing process, from automated content generation and language editing to improved content recommendations and data mining (Table 1).
Table 1. A selection of trends and initiatives around the use of AI and LLMs in scholarly publishing. AI, artificial intelligence; LLM, large language model.
While these innovations offer numerous benefits, they also present scholarly publishing with a range of critical issues that must be addressed (1). The use of LLMs and text-generating chatbots can inadvertently introduce bias, inaccuracies, and ethical concerns into scholarly content, requiring vigilant oversight to ensure the integrity and quality of published research and other content. In addition, the rapid pace of technological advancement demands that the scholarly publishing community establish guidance and best practices for the responsible use of AI in research and publication.
In developing such guidance, important principles such as transparency, responsibility, and accountability should be considered to ensure that the use of AI adheres to academic standards. Issues around data privacy, authorship attribution and accountability, intellectual property rights, and plagiarism detection all need careful consideration to safeguard the integrity and trustworthiness of research and publication.
Guidance on the use of AI and LLMs in scholarly publishing is important to promote equity, address specific challenges and opportunities, and empower researchers and publishers to leverage these technologies while ensuring the responsible, inclusive, and ethical dissemination of knowledge. Access to advanced AI technologies and LLMs is not uniform across the world, and researchers and clinicians in low- and middle-income countries face a digital divide. Ensuring access to these technologies in scholarly publishing is crucial to prevent further disparities in knowledge creation and dissemination. Although the following issues are not unique to Africa, they must be addressed in guidance on the use of AI and LLMs in African journals. Adequate data protection measures and best practices are critical to ensure data security, and guidance on how to protect sensitive data is especially important in Africa, where data privacy regulations vary. Issues related to intellectual property, plagiarism, and the ownership of AI-generated content should also be considered to protect the interests of researchers and institutions.
In Brazil, a recent study (2) based on an exploratory content analysis raised important questions about the implications of AI use in academic writing. It showed that AI technologies that generate natural-language text, such as ChatGPT, are well developed and increasingly accessible. These tools are becoming popular, particularly among graduate students and young faculty, for quickly and intuitively generating supposedly original texts. These trends are associated with strong pressure to meet rising academic productivity targets and have intensified cases of plagiarism, including cases not detected by the most popular antiplagiarism tools. This poses new challenges to editorial groups (3, 4) and academic institutions, which must identify and curb AI-induced academic misconduct. Editors and reviewers will mostly be unable to disentangle human-generated from AI-generated knowledge, as the text of a manuscript will often be a mixture of both. This can bring challenges. For example, some authors, motivated by the professional incentives attached to publishing articles, might be enticed to produce large amounts of AI-generated content, not all of which may be accurate, thereby potentially overwhelming editors, editorial boards, and reviewers with fact-checking.
Given these new opportunities and challenges, several journals and professional societies of editors have published guidance on AI in scholarly publishing (5-9). In light of these developments, the African Journal Partnership Program (AJPP) deemed it prudent to develop guidance on the use of AI, NLP, and LLMs in scholarly publishing in its journals. While preparing this guidance for AJPP journals, authors, and peer reviewers, AJPP editors and colleagues reviewed the Committee on Publication Ethics (COPE) guidance on authorship and AI tools (6), guidance from the World Association of Medical Editors (WAME) (7) and the International Committee of Medical Journal Editors (ICMJE) (8), as well as guidance from journals and publishers such as the JAMA Network (5). Importantly, AI and AI-assisted technologies should be used only to improve the readability and language of the work, and possibly as a brainstorming partner; they should not be used to carry out the work of the researcher(s), such as producing scientific insights, analyzing and interpreting data, or drawing scientific conclusions.
For Authors
The use of AI tools for manuscript preparation is permitted; however, authors remain ultimately answerable and accountable for all content in the manuscript and should be entirely transparent about which AI tools they used and how they used them. Thus, authors should follow these recommendations:
- AI tools must not be listed as authors because they do not meet authorship criteria and cannot be held accountable for a published article.
- Authors must disclose to journals at the time of manuscript submission if AI-assisted technologies (such as LLMs, text-generating chatbots, or image creators) were used to produce any of the content in the submitted work. This information can be included in the cover letter; some journals may also have a question about this in the online manuscript submission system.
- Authors must also provide information in the manuscript on which AI tool was used, how it was used, which version was used, and the date on which it was used. Note that incomplete reporting is considered an offense as grave as plagiarism.
o Authors must report fully on the use of AI to create, edit, or review any content, or to assist with those tasks, in the Acknowledgment section (giving the name of the AI tool, version number, dates of use, prompts entered, and what was done). As much detail as possible should be provided, such as which sections of the manuscript or other content contain AI-generated contributions; if any ideas were generated by AI, these should be described. For example: "ChatGPT (GPT-4, OpenAI) was used on 15 March 2024 to improve the language of the Discussion section; the authors reviewed and verified all changes."
o If the use or testing of AI tools, models, or interventions is the focus of a study, a complete description should be provided in the Methods or similar section of the manuscript (including the name of the AI tool, version number, dates used, what was done, and how any potential biases were identified and managed).
o For any section of the text for which AI was used, a clear disclaimer should be given at the start of that section.
- Authors are responsible for verifying the accuracy and appropriateness of any AI-generated outputs.
- Citation of AI-generated content as a primary source of information or content is unacceptable.
- Authors must verify the accuracy of any translations and grammar corrections suggested by AI tools.
For Peer Reviewers
Peer reviewers should be aware that one of the main tenets of peer review is confidentiality. Using AI tools may compromise this confidentiality, because information uploaded to Internet-based tools is not confidential. Hence, uploading any manuscript, or any part thereof, into an AI tool may violate the confidential nature of peer review.
- Peer reviewers must not enter any information from a submitted manuscript into an AI model/LLM.
- Peer reviewers may also be asked to evaluate whether AI-generated content in a manuscript is acceptable and meets the journal's guidelines.
For Editors
Editors continue to hold authors accountable for producing unbiased, high-quality content, regardless of how this content is generated.
- Editors are responsible for sharing standards and policies for appropriate and transparent use of AI with authors and peer reviewers.
- The role of editors includes implementing and managing AI-based tools to improve the efficiency of the manuscript submission, editorial, and peer review processes (e.g., checking submitted manuscripts for similarity with other content or plagiarism, and matching peer reviewers with manuscripts via keywords) and incorporating these tools effectively into the editorial workflow; a minimal illustrative sketch of such keyword matching appears after this list.
- Editors should not base editorial decisions solely on assessments generated by AI tools (e.g., software that attempts to identify whether content may have been generated by AI, or to predict the acceptability or post-publication performance of submitted manuscripts).
- Editors should support authors in complying with guidelines for the proper use of AI and should stay informed about advancements in AI technology to guide and facilitate the effective and ethical integration of AI in scholarly publishing.
- Editors should clearly communicate policies on the use of AI in author and reviewer guidelines.
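To make the reviewer-matching idea above concrete, the following is a minimal illustrative sketch in Python; it is not the system used by any AJPP journal. It assumes reviewer expertise is summarized as free-text keyword profiles (the reviewer names and profiles here are hypothetical) and ranks reviewers against a manuscript abstract using TF-IDF vectors and cosine similarity, one common keyword-matching approach.

```python
# Illustrative sketch only: keyword-based reviewer matching via TF-IDF.
# Reviewer names and profiles are hypothetical; this is not any journal's
# actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

manuscript_abstract = (
    "Ambient air pollution and respiratory health outcomes in urban "
    "populations: a cross-sectional study."
)

# Hypothetical reviewer keyword profiles.
reviewer_profiles = {
    "Reviewer A": "air pollution, environmental epidemiology, respiratory disease",
    "Reviewer B": "surgical outcomes, trauma care, orthopaedics",
    "Reviewer C": "climate change, public health, exposure assessment",
}

# Place the abstract and all profiles in one TF-IDF space, then rank
# reviewers by cosine similarity between their profile and the abstract.
texts = [manuscript_abstract] + list(reviewer_profiles.values())
tfidf = TfidfVectorizer(stop_words="english").fit_transform(texts)
scores = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()

ranked = sorted(zip(reviewer_profiles, scores), key=lambda p: p[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score:.2f}")
```

In practice, such a ranking would only generate suggestions for an editor to review, and the confidentiality concerns raised above imply that any such tool should run within the journal's own systems rather than a public AI service.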
In summary, AI tools in scholarly publishing will become increasingly relevant as knowledge about and use of AI grow. Recommendations will remain in a state of flux as editors and publishers review developments and implement policies and processes. Journal editors and publishers need to be acutely aware of this responsibility. They should inform and guide authors and peer reviewers on best practices, build the capacity of editorial staff to use AI-based tools effectively within the manuscript submission and editorial processes, and develop policies to prevent and manage inappropriate use. The current recommendations, which are in line with international standards, will require careful, ongoing review as circumstances change.
Acknowledgment
This guidance will be published in multiple journals that participate in the African Journal Partnership Program.
Data availability statement
The research data are available from the corresponding author.
References
1. Van Noorden R, Perkel JM. AI and science: what 1,600 researchers think. Nature. 2023; 621: 672-5. https://www.nature.com/articles/d41586-023-02980-0
2. Peres F. A literacia em saúde no ChatGPT: explorando o potencial de uso de inteligência artificial para a elaboração de textos acadêmicos [Health literacy in ChatGPT: exploring the potential of using artificial intelligence to produce academic texts]. Cien Saude Colet. 2024; 29(1): e02412023.
3. Thorp HH. ChatGPT is fun, but not an author. Science. 2023; 379(6630): 313. https://doi.org/10.1126/science.adg7879
4. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature. 2023; 613(7945): 612. https://doi.org/10.1038/d41586-023-00191-1
5. Flanagin A, Kendall-Taylor J, Bibbins-Domingo K. Guidance for authors, peer reviewers, and editors on use of AI, language models, and chatbots. JAMA. 2023; 330(8): 702-3. https://doi.org/10.1001/jama.2023.12500
6. Committee on Publication Ethics. Authorship and AI tools: COPE position statement. February 13, 2023. https://publicationethics.org/cope-position-statements/ai-author
7. World Association of Medical Editors. Chatbots, generative AI, and scholarly manuscripts. Revised May 31, 2023. https://wame.org/page3.php?id=106
8. International Committee of Medical Journal Editors. Artificial intelligence (AI)-assisted technology: defining the role of authors and contributors. In: Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals. Updated May 31, 2023. https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html
9. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature. 2023; 613(7945): 612. https://doi.org/10.1038/d41586-023-00191-1