• oyo@lemm.ee · 1 year ago

    LLMs: using statistics to generate reasonable-sounding wrong answers from bad data.

      • Blackmist@feddit.uk · 1 year ago

        And the system doesn’t know either.

        For me this is the major issue. A human is capable of saying “I don’t know”. LLMs don’t seem able to.

        • xantoxis@lemmy.world · 1 year ago

          Accurate.

          No matter what question you ask them, they have an answer. Even when you point out that an answer is wrong, they simply produce a different one. There's no concept of not knowing the answer, because they don't know anything in the first place.
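
          Loosely speaking, that behavior falls out of how decoding works. Here's a minimal sketch (toy vocabulary and made-up logits, not any real model's API): softmax turns whatever logits the network emits into a probability distribution, and greedy decoding always picks some token, so there is no built-in "abstain" step.

          ```python
          # Toy illustration (hypothetical logits, not a real model): greedy
          # decoding always returns *some* token, even when the distribution
          # is nearly flat, i.e. the model is effectively guessing.
          import math

          vocab = ["Paris", "London", "Berlin", "I don't know"]

          def softmax(logits):
              # Normalize logits into probabilities that always sum to 1.
              m = max(logits)
              exps = [math.exp(x - m) for x in logits]
              total = sum(exps)
              return [e / total for e in exps]

          def decode(logits):
              # Greedy decoding: take the highest-probability token, however
              # uncertain the model is. Nothing here can decline to answer.
              probs = softmax(logits)
              best = max(range(len(probs)), key=probs.__getitem__)
              return vocab[best], probs[best]

          # Near-uniform logits: the "winner" barely beats the alternatives,
          # yet decoding still hands back a confident-sounding answer.
          token, p = decode([0.02, 0.01, 0.0, 0.0])
          print(f"answer: {token!r} with probability {p:.2f}")
          ```

          Refusal behavior can be trained on top, but the decoding loop itself has no notion of "unknown": even with an "I don't know" token in the vocabulary, nothing forces the model to prefer it.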