Dear Editor,
We thank the authors for their insightful appraisal of our study [1] in their manuscript, "ChatGPT's Limited Accuracy in Generating Anatomical Images for Medical Education." Their assessment of our research highlights the gap between the possibilities large language models appear to offer and a realistic appraisal of their current capabilities.
We concur with the assertion that diffusion models hold promise as valuable assets for generating medical illustrations, warranting further exploration. As duly noted by the authors, the development of such models necessitates bespoke datasets. To this end, we propose a compelling avenue: engaging numerous medical illustrators, each tasked with producing diverse renditions of anatomical structures. This collective repository of imagery would furnish an optimal training corpus, enabling the model to learn and replicate analogous designs. Such a process could be replicated across the various subspecialties within radiology, potentially culminating in either a versatile model capable of traversing diverse radiological domains or a suite of tailored, domain-specific models.
We harbor optimism that the discourse surrounding the use of text-to-image generative models for medical illustration will catalyze constructive innovation. However, it is imperative that these advancements remain cognizant of ethical considerations and potential biases, as delineated in our prior work [1, 2]. Such innovation should aim to alleviate the burden on authors who lack the resources to engage professional medical illustrators, while complementing, rather than supplanting, the work of practicing illustrators.
The prevailing sentiments among radiologists, illustrators, and stakeholders in LLM development will significantly shape the trajectory of innovation. Foremost among these is apprehension surrounding the perceived threat of AI-induced job displacement [3]. While this apprehension is not unfounded, we, as authors, maintain a cautiously optimistic outlook. The foreseeable future promises a proliferation of AI tools geared towards streamlining workflows and enhancing productivity, with minimal impact on livelihoods. The direction of this growth will increasingly be shaped by the present discussion and its outcomes.
References
1. Ajmera P, Nischal N, Ariyaratne S, Botchu B, Bhamidipaty K, Iyengar K, et al. Validity of ChatGPT-generated musculoskeletal images. Skeletal Radiol. 2024. https://doi.org/10.1007/s00256-024-04638-y.
2. Kumar A, Burr P, Young TM. Using AI text-to-image generation to create novel illustrations for medical education: current limitations as illustrated by hypothyroidism and Horner syndrome. JMIR Med Educ. 2024;10(1):e52155.
3. Iyengar KP, Yousef MMA, Nune A, Sharma GK, Botchu R. Perception of chat generative pre-trained transformer (ChatGPT) AI tool amongst MSK clinicians. J Clin Orthop Trauma. 2023;44:102253.
Ajmera, P., Nischal, N., Ariyaratne, S. et al. Response to: ChatGPT's limited accuracy in generating anatomical images for medical education. Skeletal Radiol 53, 1597 (2024). https://doi.org/10.1007/s00256-024-04656-w