Ethical consideration of the use of generative artificial intelligence, including ChatGPT in writing a nursing article

Article information

Child Health Nurs Res. 2023;29(4):249-251
Publication date (electronic) : 2023 October 31
doi : https://doi.org/10.4094/chnr.2023.29.4.249
Professor, Department of Parasitology and Institute of Medical Education, College of Medicine, Hallym University, Chuncheon, Korea
Corresponding author Sun Huh Department of Parasitology, College of Medicine, Hallym University, 1 Hallimdaehak-gil, Chuncheon 24252, Korea TEL: +82-33-248-2652 FAX: +82-33-256-3426 E-MAIL: shuh@hallym.ac.kr
Received 2023 October 15; Accepted 2023 October 24.

After OpenAI introduced ChatGPT (generative pre-trained transformer, GPT) to the public on November 30, 2022 [1], concerns about AI chatbots and generative artificial intelligence (AI) rose rapidly among researchers and editors. Although generative AI, including AI chatbots, has been under development since 2016, public and professional interest exploded once ChatGPT became openly available. This editorial discusses the ethical issues involved in using generative AI such as ChatGPT to write nursing articles.

1. Common Research and Publication Ethics Issues

Common research and publication ethics issues in writing articles include the following: first, research ethics issues, such as conflict of interest statements, statements of human and animal rights, statements of informed consent and institutional review board approval for studies of human populations, and registration of clinical trial research; second, publication ethics issues, such as fabrication, falsification, plagiarism, authorship disputes, and duplicate publication.

As for research ethics issues, generative AI plays no role because these issues depend on the researchers' own activities. Among publication ethics issues, authorship disputes have been discussed frequently, originating from a nursing article co-authored with ChatGPT published in Nurse Education in Practice [2]. Afterward, the journal's editor removed ChatGPT from the co-author list because "it does not qualify for authorship according to the journal's guide for authors and to Elsevier's Publishing Ethics Policies," although "it is acknowledged as making a substantial contribution to the writing of the paper" [3]. Law professor Lee [4] stated that generative AI cannot be an author since it is not a human being, and that "in the current legal system, the text automatically generated by an AI chatbot cannot be a copyrighted work." No previous case of generative AI appearing as an author has been found in the PubMed database. However, Lee [4] also pointed out that "from the perspective of research ethics, if an AI chatbot makes a significant contribution to research and can explain and prove the research results, it would be reasonable to recognize its authorship."

The text automatically generated by an AI chatbot cannot be a copyrighted work [4]. However, plagiarism is a separate issue from copyright. Plagiarism and duplicate publication are alike in that both involve copying others' work without citation; the difference is that in plagiarism the two articles share no co-author, whereas in duplicate publication at least one co-author appears in both articles. Sentences and text produced by generative AI cannot be screened by existing plagiarism-check programs, including Similarity Check (CrossCheck) [5] or CopyKiller (https://www.copykiller.com/). The International Committee of Medical Journal Editors (ICMJE) recommends that "authors should be able to assert that there is no plagiarism in their paper, including in text and images produced by the AI." How to cite work generated by an AI chatbot remains uncertain. A common method is to cite the original source underlying the AI-generated content, if possible; otherwise, the AI-generated work can be added as a supplement so that it can be verified. These two methods can avoid plagiarism or duplicate publication issues.

Fabrication or falsification is possible if a user of generative AI writes a prompt for that purpose. This differs from producing simulated data to test a dataset against a theoretical model. It is not easy to detect whether an author fabricated data using generative AI, and falsified data likewise cannot be detected easily; the same is true when humans fabricate or falsify data directly. To date, there has been no case report of fabrication or falsification using generative AI, although such misconduct has been demonstrated to be easy to commit [6,7].

2. Korean Nursing Journal's Policy on Using Generative AI for Writing Articles

Only Scopus-indexed nursing journals published by nursing societies in Korea were searched to determine whether they include a policy on using generative AI for writing articles. Table 1 shows the results.


It was found that only Asian Nursing Research has announced an AI policy, which may be because it is published by Elsevier. The policy confirmed that AI cannot qualify for authorship, required disclosure of the use of AI in writing, and assigned the author responsibility for the credibility of the content. However, it made no announcement about checking text generated by AI. It is time for nursing society journals in Korea to state their policies on using generative AI for writing articles. Inappropriate or false answers [8] are another issue besides the above ethical considerations; as the policies of Elsevier journals indicate, content credibility is the authors' responsibility.

The research and publication ethics issues involved in using generative AI to write nursing articles were briefly introduced here. It has become common for researchers to use AI chatbots in research and article writing. Following the suggestions above, editors and reviewers can be more confident in reviewing manuscripts that contain work generated by an AI chatbot.

Notes

Authors' contribution

All the work was done by Sun Huh.

Conflict of interest

No existing or potential conflict of interest relevant to this article was reported.

Funding

This study was supported by a grant from the National Research Foundation of Korea (NRF) and the Ministry of Education, Korean Government as a Research Ethics Activities Support Project (NRF-2023J1A1A1A01093462).

Data availability

Please contact the corresponding author for data availability.

Acknowledgements

None.

References

1. OpenAI. ChatGPT [Internet]. [cited 2023 October 1]. Available from: https://chat.openai.com.
2. O'Connor S. Open artificial intelligence platforms in nursing education: tools for academic progress or abuse? Nurse Education in Practice 2023;66:103537. https://doi.org/10.1016/j.nepr.2022.103537.
3. O'Connor S. Corrigendum to "Open artificial intelligence platforms in nursing education: tools for academic progress or abuse?" [Nurse Educ. Pract. 66 (2023) 103537]. Nurse Education in Practice 2023;67:103572. https://doi.org/10.1016/j.nepr.2023.103572.
4. Lee JY. Can an artificial intelligence chatbot be the author of a scholarly article? Journal of Educational Evaluation for Health Professions 2023;20:6. https://doi.org/10.3352/jeehp.2023.20.6.
5. Lammey R. CrossRef tools for small publishers. Science Editing 2015;2(2):79–85. https://doi.org/10.6087/kcse.48.
6. Elali FR, Rachid LN. AI-generated research paper fabrication and plagiarism in the scientific community. Patterns (N Y) 2023;4(3):100706. https://doi.org/10.1016/j.patter.2023.100706.
7. Gu J, Wang X, Li C, Zhao J, Fu W, Liang G, et al. AI-enabled image fraud in scientific publications. Patterns (N Y) 2022;3(7):100511. https://doi.org/10.1016/j.patter.2022.100511.
8. Huh S. Are ChatGPT's knowledge and interpretation ability comparable to those of medical students in Korea for taking a parasitology examination?: a descriptive study. Journal of Educational Evaluation for Health Professions 2023;20:1. https://doi.org/10.3352/jeehp.2023.20.1.


Table 1. Eleven Scopus-Indexed Korean Nursing Journals' Policies on Using Generative Artificial Intelligence (AI) for Writing Articles (Cited 2023 October 2)

Journal title | AI policy | Authorship | Check for AI writing | Disclosure | Credibility
Asian Nursing Research | Yes | Yes | No | Yes | Yes
Child Health Nursing Research | No | - | - | - | -
Journal of Korean Academic Society of Nursing Education | No | - | - | - | -
Journal of Korean Academy of Community Health Nursing | No | - | - | - | -
Journal of Korean Academy of Nursing | No | - | - | - | -
Journal of Korean Academy of Nursing Administration | No | - | - | - | -
Journal of Korean Academy of Psychiatric and Mental Health Nursing | No | - | - | - | -
Journal of Korean Gerontological Nursing | No | - | - | - | -
Journal of the Korean Academy of Fundamentals of Nursing | No | - | - | - | -
Korean Journal of Adult Nursing | No | - | - | - | -
Korean Journal of Women Health Nursing | No | - | - | - | -