ChatGPT’s Disapproved Authorship: A New Way To Look At Scientific Research

The AI Tools Digest
6 min read · Jan 18, 2023

With at least four authorship credits on preprints and published articles, the artificial intelligence (AI) chatbot ChatGPT, which has taken the world by storm, has made its formal debut in the scientific literature.

The appropriateness of citing the bot as an author and the presence of such AI tools in the published literature are currently topics of discussion among journal editors, academics, and publishers. Publishers are scrambling to develop standards for the chatbot, which was made available as a free tool by San Francisco, California-based software startup OpenAI in November.


ChatGPT is a large language model (LLM), a system that generates convincing sentences by mimicking the statistical patterns of language in a huge body of text gathered from the Internet. The bot is already upending industries, including academia; in particular, it is raising questions about the future of academic research and writing.
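For intuition, here is a minimal toy sketch of that statistical idea in Python: a bigram sampler that picks each next word from the words that followed the previous one in its training text. This is purely illustrative; ChatGPT is a large neural network trained on a vast corpus, not a bigram table, and the tiny corpus below is invented for the example.

```python
# Toy illustration of the statistical idea behind language models:
# learn which words tend to follow which, then sample a plausible
# continuation. Real LLMs such as ChatGPT use deep neural networks
# trained on huge corpora; this bigram table is only a sketch.
import random
from collections import defaultdict

# Invented miniature "training corpus", for illustration only.
corpus = ("the model predicts the next word and the next word "
          "follows the statistical patterns of the training text").split()

# Record, for each word, the words observed to follow it.
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(start: str, length: int = 10) -> str:
    """Sample a plausible-looking phrase one word at a time."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:  # no observed continuation for this word
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the next word follows the statistical ..."
```

Scaled up by many orders of magnitude, and with neural networks in place of raw counts, this same predict-the-next-word principle is what lets ChatGPT produce fluent, believable prose without any built-in model of truth.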

Publishers and preprint servers that Nature’s news team contacted concur that ChatGPT and other AIs do not meet the requirements for research authors because they cannot be held accountable for the integrity and content of scientific studies. However, some publishers claim that acknowledging an AI’s contribution to a paper’s writing in places other than the author list is acceptable. (Springer Nature, the publisher of Nature, and its journal team are editorially independent.)

In one instance, an editor informed Nature that ChatGPT had been incorrectly listed as a co-author and that the publication would make the necessary corrections.

Unnatural author

ChatGPT is one of 12 listed authors on a preprint about using the technology for medical education, posted on the medical repository medRxiv in December of last year.

Richard Sever, co-founder of the repository and assistant director of Cold Spring Harbor Laboratory Press in New York, says the team behind medRxiv and its sister site, bioRxiv, is debating whether it is appropriate to use and credit AI tools such as ChatGPT when writing papers. Conventions might change, he adds.

Formal authorship of an academic paper must be distinguished from the broader sense of an author as the writer of a document, Sever says. In his view, only people should be listed, because authors take on legal responsibility for their work. Of course, people may try to sneak an AI in anyway, as has already happened at medRxiv, much as people have in the past listed pets, fictional characters and the like as authors on journal articles. But that is a vetting issue rather than a policy one. (Victor Tseng, the preprint's co-author and medical director of Ansible Health in Mountain View, California, did not respond to a request for comment.)

A recent editorial in the journal Nurse Education in Practice credits the AI as a co-author alongside Siobhan O'Connor, a health-technology researcher at the University of Manchester, UK. Roger Watson, the journal's editor-in-chief, says the credit slipped through in error and will soon be corrected. "It was an oversight on my side," he says, because editorials run through a different management system from research papers.

ChatGPT was also listed as a co-author on a viewpoint article published last month in the journal Oncoscience, notes Alex Zhavoronkov, chief executive of Insilico Medicine, an AI-powered drug-discovery company based in Hong Kong. He says his firm has published more than 80 papers produced with generative AI tools. "We have experience in this field," he says. The latest paper weighs the pros and cons of taking the drug rapamycin, framed in terms of Pascal's wager.

Zhavoronkov says ChatGPT wrote a much better article than earlier generations of generative AI tools had, and that he asked the editor of Oncoscience to peer review this manuscript. Nature contacted the journal for comment but received no response.

A fourth article, co-written by an earlier chatbot called GPT-3 and posted on the French preprint server HAL in June 2022, will soon be published in a peer-reviewed journal, says co-author Almira Osmanovic Thunström, a neurobiologist at Sahlgrenska University Hospital in Gothenburg, Sweden. One journal rejected the article after review, she says, but a second accepted it, with GPT-3 listed as an author, after she rewrote the piece in response to reviewer requests.


Publisher regulations

The editors-in-chief of Nature and Science told Nature's news team that ChatGPT does not meet the standard for authorship.
"An attribution of authorship carries with it accountability for the work, which cannot be properly applied to LLMs," says Magdalena Skipper, editor-in-chief of Nature in London. She advises authors who use LLMs in any way while writing a manuscript to document that use explicitly in the methods or acknowledgements sections, as appropriate.
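As a purely illustrative sketch of what such a disclosure might look like (the wording below is hypothetical, not text prescribed by Nature or any publisher), an acknowledgements section in a LaTeX manuscript could read:

```latex
% Hypothetical example of disclosing LLM use; the exact wording and
% placement (methods vs. acknowledgements) depend on each journal's policy.
\section*{Acknowledgements}
A first draft of the introduction was generated with the assistance of
ChatGPT (OpenAI) and was subsequently reviewed, fact-checked and revised
by the authors, who take full responsibility for the final content.
```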

Holden Thorp, editor-in-chief of the Science family of journals in Washington, DC, states that “we would not allow AI to be named as an author on a paper we published, and usage of AI-generated language without proper citation may be considered plagiarism.”

According to Sabina Alam, head of publishing ethics and integrity at Taylor & Francis in London, the publisher is reviewing its policy. She agrees that authors bear responsibility for the validity and integrity of their work, and says that any use of LLMs should be acknowledged. Taylor & Francis has not yet received any submissions that credit ChatGPT as a co-author.

The board of the physical-sciences preprint server arXiv has held internal discussions and is beginning to converge on an approach to the use of generative AIs, says scientific director Steinn Sigurdsson, an astronomer at Pennsylvania State University in University Park. He agrees that, among other reasons, a software tool cannot be the author of a submission because it cannot consent to the terms of use or to the right to distribute content. According to Sigurdsson, no arXiv preprints currently list ChatGPT as a co-author, and he says guidance for authors is on its way.

The ethics of generative AI

There are already clear authorship criteria under which ChatGPT should not be credited as a co-author, says Matt Hodgkinson, a research-integrity manager at the UK Research Integrity Office in London, speaking in a personal capacity. One criterion is that a co-author must make a "substantial scholarly contribution" to the paper, which he suggests might be possible with tools such as ChatGPT. But a co-author must also be able to agree to be a co-author and to take responsibility for a study, or at least for the part they contributed. It is this second requirement, he says, where the idea of granting an AI tool co-authorship really runs into trouble.

Zhavoronkov says his attempts to get ChatGPT to write papers more technical than the viewpoint he published were unsuccessful. "It does very often return the statements that are not necessarily true, and if you ask it several times the same question, it will give you different answers," he says. He adds that, because people without subject-matter expertise can now attempt to write scientific papers, "I will undoubtedly be concerned about the misuse of the system in academia."

Paraphrased by Quillbot from the original article "ChatGPT listed as author on research papers: many scientists disapprove", published at https://www.nature.com on January 18, 2023. No rights infringement intended.
