• catloaf@lemm.ee · 2 months ago

    To lie requires intent to deceive. LLMs do not have intent; they are statistical language algorithms.
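
    To make “statistical language algorithm” concrete, here’s a minimal sketch (Python; the hand-written bigram table is a hypothetical stand-in for a real model’s learned weights, at toy scale). Generation is just repeated sampling from a next-token probability distribution; nowhere in the loop is there a slot for intent.

    ```python
    import random

    # Hypothetical hand-written bigram table: token -> next-token probabilities.
    # A real LLM learns billions of parameters; the principle is the same.
    NEXT_TOKEN_PROBS = {
        "the": {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
        "cat": {"sat": 0.6, "ran": 0.4},
        "dog": {"ran": 0.7, "barked": 0.3},
        "sat": {"<end>": 1.0},
        "ran": {"<end>": 1.0},
        "barked": {"<end>": 1.0},
    }

    def generate(token: str, max_len: int = 10) -> str:
        """Generate text by repeatedly sampling the next token.

        There is no belief, goal, or intent here -- only sampling from
        a probability distribution conditioned on the previous token.
        """
        out = [token]
        for _ in range(max_len):
            dist = NEXT_TOKEN_PROBS.get(token, {"<end>": 1.0})
            tokens, weights = zip(*dist.items())
            token = random.choices(tokens, weights=weights)[0]
            if token == "<end>":
                break
            out.append(token)
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat"
    ```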

    • CosmoNova@lemmy.world · 2 months ago

      It’s interesting that they call it a lie when it can’t even think, but when a person is caught lying, the media talks about “untruths” or “inconsistencies”.

      • MrVilliam@lemm.ee · 2 months ago

        Well, LLMs can’t drag corporate media through long, expensive, public legal battles over slander/libel and defamation.

        Yet.

    • moakley@lemmy.world · 2 months ago

      I’m not convinced some people aren’t just statistical language algorithms. And I don’t just mean online; I mean that seems to be how some people’s brains work.

      • catloaf@lemm.ee · 2 months ago

        I’ve read the article. If there is any dishonesty, it is on the part of the model creator or LLM operator.

      • gravitas_deficiency@sh.itjust.works · 2 months ago

        You need to understand that Lemmy has a lot of users who actually understand neural networks and the nuanced mechanics of machine learning FAR better than the average layperson.

        • venusaur@lemmy.world · 2 months ago

          And A LOT of people who don’t and blindly hate AI because of posts like this.

        • thedruid@lemmy.world · 2 months ago

          That’s a huge, arrogant, and quite insulting statement. You’re making assumptions based on stereotypes.

            • thedruid@lemmy.world · 2 months ago

              No. You’re mad at someone who isn’t buying that AIs are anything but a cool parlor trick that isn’t ready for prime time.

              Because that’s all I’m saying. They are wrong more often than right. They do not complete tasks given to them, and they really are garbage.

              Now, this is all regarding the publicly available AIs. Whatever new secret voodoo OpenAI or the military has, I can’t speak to.

              • gravitas_deficiency@sh.itjust.works · 2 months ago

                Uh, just to be clear, I think “AI” and LLMs/codegen/imagegen/vidgen in particular are absolute cancer, and are often snake oil bullshit, as well as being meaningfully societally harmful in a lot of ways.

            • thedruid@lemmy.world · 2 months ago

              You’re just as bad.

              Let’s focus on a spell check issue.

              That’s why we have Trump.

  • Randomgal@lemmy.ca · 2 months ago

    Exactly. They aren’t lying, they’re completing the objective. Like machines… because that’s what they are. They don’t “talk” or “think”; they do what you tell them to do.

  • daepicgamerbro69@lemmy.world · edited · 2 months ago

    They paint this as if it were a step back, as if it doesn’t already copy human behaviour perfectly and isn’t in line with technofascist goals. Sad news for smartasses who thought they were getting a perfect magic 8-ball. Sike: get ready for fully automated troll farms to be 99% of the commercial web for the next decade(s).

    • wischi@programming.dev · 2 months ago

      To be fair, the Turing test is a moving goalpost, because if you know that such systems exist, you’d probe them differently. I’m pretty sure even the first public GPT release would have fooled Alan Turing personally, so I think it’s fair to say these systems have passed the test at least since then.

  • Ogmios@sh.itjust.works · 2 months ago

    I mean, it was trained to mimic human social behaviour. If you want a completely honest LLM, I suppose you’d have to train it on the social behaviours of a population that is always completely honest, and I’m not personally familiar with such a population.

    • wischi@programming.dev · 2 months ago

      AI isn’t even trained to mimic human social behavior. Current models are all trained by example, so they produce output that would have scored highly in their training process. We don’t even know what their goals are (and they’re likely not even expressible in language), but, anthropomorphised, they’re probably something like “answer in a way that the humans who designed and oversaw the training process would approve of.”
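
      A toy sketch of that point (Python; the scoring heuristics below are made-up stand-ins for a learned reward model, not anything real systems use verbatim). The optimization pressure is “score high with the raters”, and nothing in the objective checks truth, so a confident fabrication can outscore an honest “I don’t know”:

      ```python
      # Made-up approval heuristics standing in for a learned reward model.
      def approval_score(answer: str) -> float:
          """Proxy for 'would the human raters approve?' -- note that
          nothing in here measures whether the answer is actually true."""
          score = 0.0
          if "don't know" in answer:
              score -= 1.0          # hedging tends to get rated down
          if answer.endswith("."):
              score += 0.5          # fluent, complete sentences rate well
          score += 0.1 * min(len(answer.split()), 20)  # detail reads as competence
          return score

      candidates = [
          "I don't know.",
          "The answer is 42, as established by extensive prior analysis.",
      ]

      # Training pressure selects whatever scores highest -- true or not.
      best = max(candidates, key=approval_score)
      print(best)  # prints the confident answer, regardless of its truth
      ```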