• 𝘋𝘪𝘳𝘬@lemmy.ml
    · 20 days ago

    So at least 22 papers from the study were AI generated and not checked afterwards.

    This says more about the AI users who claim authorship than it does about AI itself.

    • toy_boat_toy_boat@lemmy.world
      · 20 days ago

      i am not in any way qualified to say what i’m about to say, so you should probably just stop reading.

      awt awt awt awt a tawr tat awt aw ta awrt gawr tgar a aiuknalrghber,jhmngbae,rkjgaat aawt aaaera r aw aergaaegaebaen,rjhbae,rjgabear aw awr awr aw awert

      • T156@lemmy.world
        · 20 days ago

        At least one major paper did, although it used AI images instead of text.

        There was a paper with AI-generated diagrams that not only passed peer review somehow, but was published in a pretty major, reputable journal.

        You’d normally have expected them to catch it in peer review and decline to publish, especially since they took it down later.

        • canihasaccount@lemmy.world
          · 20 days ago

          Nothing in Frontiers is reputable among scientists. It gets linked a lot on Reddit because it’s open access, but scientists tend to view it as essentially the not-actually-peer-reviewed equivalent of a preprint. In the past, if all reviewers recommended rejection at Frontiers, the publishing staff would forcibly assign the editor new reviewers, and this would continue until the manuscript was accepted. Not sure if that’s still the case (I’ve blocked all Frontiers emails), but it’s not correct to call a Frontiers journal a major reputable journal.

  • Septimaeus@infosec.pub
    · 20 days ago

    There was a comment yesterday that offered a simpler explanation than the headline’s conclusion.

    The papers were published by Iranian researchers and in Farsi “scanning” (روبشی) and “vegetative” (رويشی) differ only by one character (ب and یـ) which also happen to be adjacent on the keyboard.

    That is, there’s some evidence that this is a typo or mistranslation that has been reused among non-native speakers, as opposed to a hallucination. If so, it could still be an LM replicating the error, but I’ve definitely seen humans make the exact same mistake, especially when there’s a strong language barrier.

    Edit: brevity
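
    The one-character difference described above is easy to check. A minimal Python sketch, using the two Farsi words quoted in the comment, confirms that a position-by-position comparison finds exactly one differing character:

    ```python
    # The two Farsi words from the papers: "scanning" vs. "vegetative".
    scanning = "روبشی"
    vegetative = "رويشی"

    # Both words are five characters long, so a position-by-position
    # comparison is enough to count the differing characters.
    assert len(scanning) == len(vegetative)
    diff = sum(a != b for a, b in zip(scanning, vegetative))
    print(diff)  # 1 — the words differ only at the third character
    ```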

    • bitcrafter@programming.dev
      · 20 days ago

      A couple of decades ago I got really confused because I found a lot of papers referring to “comer” cubes, but could not find an actual definition. Eventually I figured out that these were actually “corner” cubes, but somewhere a transcription error occurred that merged the r and n into an m, and this error kept getting propagated because people were just copying and pasting.

      • Septimaeus@infosec.pub
        · 19 days ago

        That’s an apt example from English, especially given the visual similarity of the error.

        It’s the kind of error we would expect AI to be especially resilient against, since the phrase “corner cube” probably appears many times in the training dataset.

        Likewise scanning electron microscopes are common instruments in many schools and commercial labs, so an AI writing tool is likely to infer a correction needed given the close similarity.

        Transcription errors by human authors, however, have been dutifully copied into future works since we began writing stuff down.

    • catloaf@lemm.ee
      · 20 days ago

      Yes. Between that and bad OCR failing to recognize column layouts, so that words from two separate columns were read as a single phrase, it makes sense that the error would be replicated in machine translations.
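
      The column-merge failure mode can be sketched in a few lines of Python. The column text below is hypothetical, not the actual scanned page; it just shows how reading a two-column layout row by row fuses unrelated words into one phrase:

      ```python
      # Hypothetical two-column page fragment. Read column by column,
      # the left column says "vegetative cover was measured" and the
      # right column independently says "electron microscopy images".
      left_column = ["vegetative", "cover was measured"]
      right_column = ["electron microscopy", "images"]

      # Faulty OCR that ignores the column boundary reads row by row,
      # joining each left-column line directly to its right-hand neighbor:
      row_wise = [" ".join(row) for row in zip(left_column, right_column)]
      print(row_wise[0])  # vegetative electron microscopy
      ```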

  • MonkderVierte@lemmy.ml
    · 20 days ago

    don’t make sense. Kind of like this AI-generated image.

    Ancient optoelectronic circuitry from the future?