It’s pretty easy to see the problem here: The Internet is brimming with misinformation, and most large language models are trained on a massive body of text obtained from the Internet.

Ideally, the much larger volume of accurate information would simply overwhelm the lies. But is that really the case? A new study by researchers at New York University examines how much medical misinformation can be included in a large language model (LLM) training set before the model starts spitting out inaccurate answers. While the study doesn’t identify a lower bound, it does show that by the time misinformation accounts for just 0.001 percent of the training data, the resulting LLM is compromised.
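For a sense of scale, here’s a quick back-of-the-envelope sketch in Python; the trillion-token corpus size and the ~500-tokens-per-article figure are illustrative assumptions, while the 0.001 percent threshold is the one reported in the study.

```python
# Back-of-the-envelope: how much text is 0.001 percent of a training corpus?
# The corpus size and tokens-per-article below are illustrative assumptions,
# not figures from the study.

corpus_tokens = 1_000_000_000_000   # assume a 1-trillion-token training set
poison_fraction = 0.001 / 100       # 0.001 percent, the study's threshold

poison_tokens = corpus_tokens * poison_fraction
articles = poison_tokens / 500      # assume ~500 tokens per fabricated article

print(f"Poisoned tokens: {poison_tokens:,.0f}")   # -> 10,000,000
print(f"Approx. articles: {articles:,.0f}")       # -> 20,000
```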

  • Ech@lemm.ee · 17 hours ago

    Really satisfying to see “LLM” instead of “AI” in that headline.

  • MagicShel@lemmy.zip · 18 hours ago

    Kinda shows there is a limit to how far you can get simply ingesting all text that exists. At some point, someone is going to need to curate perhaps billions of documents, which just based on volume will necessarily be done by people unqualified to really do so. And even if it were possible for a small group of people to curate such a data set, it would become an enormously political position to be in.

    • br3d@lemmy.world · 15 hours ago

      We’ve been curating existing knowledge for years, in the form of textbooks and reference works. This is just people thinking they can get the same benefits without the expense, and it’ll come crashing down soon enough when people see that you need to handle concepts, not just surface words with a superficial autocomplete.

      • Ech@lemm.ee · 14 hours ago

        Weird that they don’t just…you know…copy that.

    • gravitas_deficiency@sh.itjust.works · 13 hours ago

      It’s ok; we’ll just point more LLMs trained in data curation at the data. WCGW?

      I swear to god, I feel like all of these LLM circlejerking shills have systematically forgotten one of the foundational points of computer science: garbage in, garbage out.

    • Pennomi@lemmy.world · 17 hours ago

      Even curation seems unlikely to fix the problem. I bet a new algorithm is required that allows LLMs to validate their responses before they’re returned. Basically an “inner monologue” to avoid saying stupid things.
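      For illustration, here’s a minimal Python sketch of that check-before-returning loop; generate() and critique() are hypothetical stand-ins for model calls, not any real LLM API.

      ```python
      from typing import Callable

      def answer_with_inner_monologue(
          prompt: str,
          generate: Callable[[str], str],        # hypothetical: drafts an answer
          critique: Callable[[str, str], bool],  # hypothetical: does the draft hold up?
          max_attempts: int = 3,
      ) -> str:
          """Draft an answer, check it, and revise before anything reaches the user."""
          draft = generate(prompt)
          for _ in range(max_attempts):
              if critique(prompt, draft):
                  return draft  # the draft passed the internal check
              # Fold the failed draft back into the prompt and ask for a revision.
              revision_prompt = (
                  f"{prompt}\n\nYour previous answer may contain errors:\n{draft}\n"
                  "Revise it and remove anything you cannot support."
              )
              draft = generate(revision_prompt)
          return draft  # give up after max_attempts checks and return the last draft
      ```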

  • ribhu@lemmy.world · 15 hours ago

    How old is this study? The LLMs mentioned are Llama 2 and GPT-3.5, which are almost archaic by current standards.

    • Zron@lemmy.world · 15 hours ago

      Unfortunately, it’s a lot harder to rigorously test something than it is to shit a new product out into the wild with no regard for its impact.