There is a first time for everything, and the day finally came (March 7, 2023) when, as an editor, A.C.O. found herself requesting confirmation that an artificial intelligence (AI) was not the author of a manuscript submitted for potential publication in our journal. Her exact words were, “Provide a disclaimer at the end of the manuscript confirming that you and not ChatGPT wrote the article.” The corresponding author was Saliba T., and the article in question was “ChatGPT: A radiologist’s perspective” [1].

In a thought-provoking Letter to the Editor, the authors summarise the current strengths and weaknesses of the software tool, ChatGPT (Chat Generative Pre-trained Transformer).

The editors had, of course, heard about and even read some articles on ChatGPT (superlative pieces bordering on the sensationalistic in the media, more measured coverage in the scientific literature). However, it was not until receiving the paper by Saliba and Boitsios [1] that the editors could no longer resist the temptation and tried ChatGPT themselves. One example is included below:

Question (Prof. A.C. Offiah):

In 250 words, critically review a 2023 journal article in Pediatric Radiology in which the authors, Saliba T. and Boitsios G. provide a radiologist’s perspective on ChatGPT. They conclude with the words of Curtis Langlotz that “AI won’t replace radiologists, but radiologists who use AI will replace those who don’t”.

Answer (ChatGPT):

The article by Saliba T. and Boitsios G. titled “A Radiologist’s Perspective on ChatGPT” provides an interesting insight into the use of artificial intelligence (AI) in the field of radiology. The authors have presented a comprehensive overview of the current state of AI in radiology and the potential benefits and challenges on integrating AI into clinical practice.

The authors highlight the potential of ChatGPT, a large language model trained by OpenAI, in assisting radiologists with report generation, differential diagnosis, and image interpretation. They also acknowledge the challenges associated with AI, including concerns regarding accuracy, reliability, and ethical issues. The authors rightly suggest that AI should not replace radiologists, but rather complement their skills and expertise.

The authors have effectively presented the views of various experts in the field and have cited relevant studies to support their arguments. However, the article lacks a critical analysis of the limitations of ChatGPT and its potential impact on the role of radiologists. For instance, the authors do not discuss the potential negative consequences of over-reliance on AI and the possibility of radiologists losing their interpretive skills.

Furthermore, while the authors provide some insight into the ethical issues surrounding the use of AI in radiology, they do not delve deeper into the potential impact of AI on patient care, patient privacy, and data security. Given the growing concerns regarding the ethical implications of AI in healthcare, this is a significant limitation of the article.

In conclusion, the article by Saliba T. and Boitsios G. provides a useful overview of the potential benefits and challenges of using AI in radiology. However, the article lacks a critical analysis of the limitations and potential negative consequences of AI, and the potential ethical implications of its use in clinical practice. The authors’ conclusion that radiologists who use AI will replace those who don’t, while a catchy quote, seems somewhat oversimplistic and lacking in nuance.

The above review by ChatGPT is not at all bad, considering that the response was generated in seconds, without sight of the original article. This capability is why many worry that the introduction of ChatGPT heralds a rapid and significant change in education, science and research as we know them.

From an editor’s perspective, there is concern that manuscripts may not have been authored by humans; indeed, one study (in preprint) reports that 32% of ChatGPT-generated abstracts (based on completely generated data) misled scientists, who believed them to be original [2]. Important words of caution related to the ethics, legal issues, innovation, accuracy, bias and transparency of ChatGPT are raised by Biswas, S., who has declared that sections of his article were written by ChatGPT and then edited by himself [3]. This last point is the key: it is very difficult to police the use of ChatGPT. The question, therefore, is whether authors (and reviewers [4]) should disclose that they have used ChatGPT and take full responsibility for any errors arising therefrom.

The editors of the Science group of publications think not: they have updated their editorial policies to specify that no aspect of any manuscript submitted to them (text, figures, images, graphics) may be generated by ChatGPT (unless ChatGPT is intentionally used as part of the research). A violation of this policy will be deemed scientific misconduct [5].

Springer Nature (the publisher of our journal, Pediatric Radiology) has taken a different view. Its recently announced AI-related policy can be summarized as follows: (1) AI writing tools can be used to write manuscripts and to conceptualize research ideas, (2) this contribution must be openly declared and (3) AI tools cannot be listed as authors [6]. Attribution of authorship carries with it accountability for the work, and such accountability cannot be meaningfully applied to large language models. As the Managing Editors of Pediatric Radiology, we will update our Author Guidelines, henceforth mandating that authors declare the role played by AI tools, not as a disclaimer, but within the Materials and Methods section of their manuscripts. If a Materials and Methods section is not relevant to a manuscript, the Introduction can be used to document the use of any AI tools.

These large language models can be a useful aid in scientific writing and editing, especially for non-native English speakers, and can even help authors to add accurately formatted references. However, authors should limit the use of these tools to topics on which they are subject matter experts, to ensure that the information provided is accurate and up to date. Whatever AI tool is used, the authors remain responsible for the scientific integrity of their publications.