ChatGPT's ability to generate realistic experimental images poses a new challenge to academic integrity

J Hematol Oncol. 2024 May 1;17(1):27. doi: 10.1186/s13045-024-01543-8.

Abstract

The rapid advancement of large language models (LLMs) such as ChatGPT has raised concerns about their potential impact on academic integrity. While initial concerns focused on ChatGPT's writing capabilities, recent updates have integrated DALL-E 3's image generation features, extending the risks to visual evidence in biomedical research. Our tests revealed that ChatGPT's nearly barrier-free image generation can produce experimental result images, such as blood smears, Western blots, and immunofluorescence micrographs. Although ChatGPT's current ability to generate experimental images is limited, the risk of misuse is evident. This development underscores the need for immediate action. We suggest that AI providers restrict the generation of experimental images, develop tools to detect AI-generated images, and consider adding "invisible watermarks" to generated images. By implementing these measures, we can better ensure the responsible use of AI technology in academic research and maintain the integrity of scientific evidence.
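To make the "invisible watermark" proposal concrete, a minimal sketch follows, assuming a simple least-significant-bit (LSB) embedding in 8-bit pixel values. This scheme, the function names, and the sample pixel data are illustrative assumptions; the letter does not specify a watermarking method, and production provenance systems use far more robust techniques.

```python
# Illustrative sketch of an "invisible watermark" (assumption: simple LSB
# embedding; not the scheme any AI provider actually uses).

def embed_watermark(pixels: list[int], bits: str) -> list[int]:
    """Hide each watermark bit in the least significant bit of one pixel."""
    out = pixels[:]
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | int(b)  # clear the LSB, set it to the mark bit
    return out

def extract_watermark(pixels: list[int], n_bits: int) -> str:
    """Read the hidden bits back out of the pixel LSBs."""
    return "".join(str(p & 1) for p in pixels[:n_bits])

# Hypothetical 8-pixel grayscale strip carrying the 4-bit mark "1011"
strip = [200, 131, 54, 77, 90, 18, 255, 0]
marked = embed_watermark(strip, "1011")
assert extract_watermark(marked, 4) == "1011"
# Each pixel changes by at most 1, so the mark is visually imperceptible,
# yet a detection tool can recover it programmatically.
```

Because each embedded bit perturbs a pixel by at most one intensity level, such a mark is invisible to readers and reviewers but machine-detectable, which is the property the letter's third recommendation relies on.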

Keywords: Academic integrity; Artificial intelligence; ChatGPT; DALL-E; Experimental images; Large language model; Western Blot.

Publication types

  • Research Support, Non-U.S. Gov't
  • Letter

MeSH terms

  • Artificial Intelligence
  • Biomedical Research* / methods
  • Humans
  • Image Processing, Computer-Assisted / methods
  • Software