Large language models (LLMs) like ChatGPT can generate and revise text with human-level performance. These models come with clear limitations: they can produce inaccurate information and reinforce existing biases. Yet many scientists use them for their scholarly writing. But how widespread is such LLM usage in the academic literature? To answer this question for the field of biomedical research, we present an unbiased, large-scale approach: we study vocabulary changes in more than 15 million biomedical abstracts from 2010 to 2024 indexed by PubMed and show how the appearance of LLMs led to an abrupt increase in the frequency of certain style words. This excess word analysis suggests that at least 13.5% of 2024 abstracts were processed with LLMs. This lower bound differed across disciplines, countries, and journals, reaching 40% for some subcorpora. We show that LLMs have had an unprecedented impact on scientific writing in biomedical research, surpassing the effect of major world events such as the COVID pandemic.
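For anyone wondering how an "excess word" estimate might work in practice, here is a minimal Python sketch of the idea as described in the abstract: fit a word's pre-LLM frequency trend, extrapolate a counterfactual value for 2024, and call the gap between observed and expected usage the "excess". Everything here is illustrative, not the paper's actual code: `abstracts_by_year`, the two-point linear fit, and the toy corpus are all assumptions, and the study's real counterfactual modeling is more careful.

```python
from typing import Dict, List

def yearly_frequency(abstracts_by_year: Dict[int, List[str]], word: str) -> Dict[int, float]:
    """Fraction of abstracts per year containing `word` at least once."""
    return {
        year: sum(word in a.lower().split() for a in abstracts) / len(abstracts)
        for year, abstracts in abstracts_by_year.items()
    }

def excess_frequency(freq: Dict[int, float], target_year: int = 2024,
                     fit_years: tuple = (2021, 2022)) -> float:
    """Observed frequency minus a counterfactual extrapolated from the
    pre-LLM trend (here: a straight line through two earlier years)."""
    y0, y1 = fit_years
    slope = (freq[y1] - freq[y0]) / (y1 - y0)
    expected = freq[y1] + slope * (target_year - y1)
    return freq[target_year] - expected

# Hypothetical toy corpus (year -> abstract strings), just to exercise the code.
corpus = {
    2021: ["we measured expression levels", "results were analyzed with anova"],
    2022: ["we measured binding affinity", "data were analyzed per protocol"],
    2024: ["here we delve into expression changes", "we delve into binding dynamics"],
}
print(excess_frequency(yearly_frequency(corpus, "delve")))  # 1.0: every 2024 abstract uses it
```

Presumably, summing the excess frequencies of marker words that rarely co-occur in the same abstract yields a lower bound on the share of abstracts touched by an LLM, which would be where a figure like the 13.5% comes from.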

  • renzhexiangjiao@piefed.blahaj.zone · 16 hours ago

    tbh I don’t see anything wrong with using AI just to write the abstract, assuming the author reviews and edits it afterwards. It becomes much more problematic if AI is used in the body of the paper, where it is crucial to present information as accurately as possible.