LLMs performed best on questions related to legal systems and social complexity, but they struggled significantly with topics such as discrimination and social mobility.

“The main takeaway from this study is that LLMs, while impressive, still lack the depth of understanding required for advanced history,” said del Rio-Chanona. “They’re great for basic facts, but when it comes to more nuanced, PhD-level historical inquiry, they’re not yet up to the task.”

Among the tested models, GPT-4 Turbo ranked highest with 46% accuracy, while Llama-3.1-8B scored the lowest at 33.6%.

  • Archmage Azor@lemmy.world · 12 hours ago

    Why is it that whenever one of these tests is performed on AIs, they only test LLMs? It’s like forcing an English major to take a history test.

    Do it again with an AI trained on historical data.

    • jacksilver@lemmy.world · 11 hours ago

      LLMs are general purpose models trained on text. The thought is that they should be able to address anything that can be represented in a textual format.

      While you could focus the model by providing only specific types of text, the general notion is that they should be able to handle tasks across different domains and disciplines.