LLMs performed best on questions related to legal systems and social complexity, but they struggled significantly with topics such as discrimination and social mobility.

“The main takeaway from this study is that LLMs, while impressive, still lack the depth of understanding required for advanced history,” said del Rio-Chanona. “They’re great for basic facts, but when it comes to more nuanced, PhD-level historical inquiry, they’re not yet up to the task.”

Among the tested models, GPT-4 Turbo ranked highest with 46% accuracy, while Llama-3.1-8B scored the lowest at 33.6%.

  • Jubei Kibagami@lemmy.world · 7 hours ago

    I mean, it’s probably better that it doesn’t understand it than understanding it and then trying to end us like Ultron 😅

  • Archmage Azor@lemmy.world · 5 hours ago

    Why is it that whenever one of these tests is performed on AIs they only test LLMs? It’s like forcing an English major to take a history test.

    Do it again with an AI trained on historical data.

    • jacksilver@lemmy.world · 4 hours ago

      LLMs are general-purpose models trained on text. The thought is that they should be able to address anything that can be represented in a textual format.

      While you could focus the model by only providing specific types of text, the general notion is they should be able to handle tasks ranging across different domains/disciplines.

  • CircuitGuy@lemmy.world · 5 hours ago

    I read some of it, but I find it funny because the bar is set so ridiculously high for a new technology: understanding human history.

  • QuarterSwede@lemmy.world · 1 day ago (edited)

    Ugh. No one in the mainstream understands WHAT LLMs are and do. They’re really just basic input-output mechanisms. They don’t understand anything. Garbage in, garbage out, as it were.

    • drosophila@lemmy.blahaj.zone · 9 hours ago (edited)

      Specifically, they are completely incapable of unifying information into a self-consistent model.

      To use an analogy: you see a shadow and know it’s being cast by some object with a definite shape, even if you can’t be sure what that shape is. An LLM sees a shadow, and its idea of what’s casting it is as fuzzy and mutable as the shadow itself.

      Funnily enough, old-school AI from the 70s, like logic engines, possessed a super-human ability for logical self-consistency. A human can hold contradictory beliefs without realizing it; a logic engine is incapable of self-contradiction once all of the facts in its database have been collated. (This is where the sci-fi idea of robots like HAL-9000 and Data from Star Trek comes from.) However, this perfect reasoning ability left logic engines completely unable to deal with contradictory or ambiguous information, as well as logical paradoxes. They were also severely limited by the fact that practically everything they knew had to be explicitly programmed into them. So if you wanted one to be able to hold a conversation in plain English, you would have to enter all kinds of information that we know implicitly, like the fact that water makes things wet or that most, but not all, people have two legs. A basically impossible task.

      With the rise of machine learning and large artificial neural networks we solved the problem of dealing with implicit, ambiguous, and paradoxical information but in the process completely removed the ability to logically reason.
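
      To make the contrast concrete, here is a toy forward-chaining sketch in Python (purely illustrative, not from the study or any real system): every fact and rule has to be typed in by hand, and the engine just keeps collating them until nothing new follows, which is why it can never contradict itself but also can’t cope with ambiguity.

      ```python
      # Minimal forward-chaining "logic engine" sketch (illustrative only).
      # All knowledge must be entered explicitly, as described above.
      facts = {"touched_water", "is_paper"}            # hand-entered facts
      rules = [
          ({"touched_water"}, "is_wet"),               # water makes things wet
          ({"is_wet", "is_paper"}, "is_ruined"),       # wet paper is ruined
      ]

      changed = True
      while changed:                                   # collate until a fixed point
          changed = False
          for premises, conclusion in rules:
              if premises <= facts and conclusion not in facts:
                  facts.add(conclusion)
                  changed = True

      print(facts)  # {'touched_water', 'is_paper', 'is_wet', 'is_ruined'}
      ```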

    • Epzillon@lemmy.world · 7 hours ago

      I just like the analogy of a dashboard with knobs: input text on one side, output text on the other. “Training” AI is simply letting the knobs adjust themselves based on feedback from the output. AI never “learns”; it only produces output based on how the knobs are dialed in. It’s not a magic box, it’s just a lot of settings converting data to new data.
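
      As a purely hypothetical toy (mine, not from the article), the knob picture fits in a few lines of Python: a single knob gets nudged by feedback on the output error, and that adjustment loop is all “training” means at this level of description.

      ```python
      # One "knob" being dialed in by feedback on the output (illustrative only).
      knob = 0.0                                       # a single adjustable setting
      examples = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # inputs and desired outputs

      for _ in range(200):                             # repeated rounds of feedback
          for x, target in examples:
              output = knob * x                        # data in, data out
              error = output - target                  # feedback on the output
              knob -= 0.01 * error * x                 # nudge the knob, nothing more

      print(round(knob, 2))                            # ~3.0 once dialed in
      ```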

      • intensely_human@lemm.ee · 5 hours ago

        Do you think real “understanding” is a magic process? Why would LLMs have to be “magic” in order to understand things?

    • UnderpantsWeevil@lemmy.world · 23 hours ago

      “They’re really just basic input-output mechanisms.”

      I mean, I’d argue they’re highly complex I/O mechanisms, which is how you get weird hallucinations that developers can’t easily explain.

      But expecting cognition out of a graph is like demanding novelty out of a plinko machine. Not only do you get out what you put in, but you get a very statistically well-determined output. That’s the whole point. The LLM isn’t supposed to be doing high-level cognitive extrapolations. It’s supposed to be doing statistical aggregates on word association using a natural language schema.
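
      A deliberately crude sketch of that word-association idea (my own toy, orders of magnitude simpler than any real LLM): count which word follows which, then emit the statistically best-determined continuation.

      ```python
      # Tiny word-association model (illustrative only, nothing like a real LLM).
      from collections import Counter, defaultdict

      text = "the cat sat on the mat and the cat slept on the mat".split()

      follows = defaultdict(Counter)
      for current, nxt in zip(text, text[1:]):
          follows[current][nxt] += 1                   # aggregate association counts

      word, generated = "the", ["the"]
      for _ in range(4):
          word = follows[word].most_common(1)[0][0]    # best-determined next word
          generated.append(word)

      print(" ".join(generated))                       # "the cat sat on the"
      ```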

    • snooggums@lemmy.world · 23 hours ago

      That is accurate, but the people who design and distribute LLMs refer to the process as machine learning and use terms like hallucinations, which is the primary cause of the confusion.

      • SinningStromgald@lemmy.world · 23 hours ago

        I think the problem is the use of the term AI. Regular Joe Schmo hears/sees AI and thinks Data from ST:TNG or Cylons from Battlestar Galactica, and not glorified search-engine chatbots. But AI sounds cooler than LLM, so they use AI.

        • Grimy@lemmy.world · 20 hours ago

          The term is fine. Your examples are very selective. I doubt Joe Schmo thought the aimbots in CoD were truly intelligent when he referred to them as AI.

  • IninewCrow@lemmy.ca · 1 day ago

    This isn’t new and noteworthy … because we humans don’t understand human history and fail miserably to understand or remember past failings in every generation.

  • AbouBenAdhem@lemmy.world · 24 hours ago

    “For over a decade, complexity scientist Peter Turchin and his collaborators have worked to compile an unparalleled database of human history – the Seshat Global History Databank. Recently, Turchin and computer scientist Maria del Rio-Chanona turned their attention to artificial intelligence (AI) chatbots, questioning whether these advanced models could aid historians and archaeologists in interpreting the past.”

    Peter Turchin and his collaborators don’t have a great record of understanding human history themselves—their basic shtick has been to try to validate an Enlightenment-era, linear view of human history with statistics from their less-than-rigorous database, with less-than-impressive results. I wouldn’t necessarily expect an AI to outperform them, but I wouldn’t trust their evaluation of it, either.

  • Grimy@lemmy.world · 24 hours ago (edited)

    “LLMs demonstrated greater accuracy when addressing questions about ancient history, particularly between 8,000 BCE and 3,000 BCE, but struggled with more recent events, especially from 1,500 CE to the present.”

    I’m not entirely surprised by this. LLMs are trained on the whole internet, not just the good parts. There are groups online that are very vocal about things like the Confederates being in the right, for example. It would make sense to assume this essentially poisons the datasets. Realistically, no one is contesting history before that time.

    Not that it isn’t a problem and doesn’t need fixing, just that it makes “sense”.

  • A_A@lemmy.world · 22 hours ago

    Suppose A and B are at war and, based on all the insults they throw at each other, you train an LLM to explain what’s going on. Well, it will be quite bad. Maybe this is some part of the explanation.

    • Flying Squid@lemmy.world (OP) · 22 hours ago

      But that’s exactly the problem. Humans with degrees in history can figure out what is an insult and what is a statement of fact a hell of a lot better than an LLM.

      • A_A@lemmy.world · 22 hours ago

        It took maybe thousands or even millions of years for nature to create animals that understand who they are and what’s going on around them. Give those machines a few more years; they are not all LLMs, and they are advancing quite rapidly.
        Finally, I completely agree with you that, for the time being, they are very bad at playing historian.