Here's my idea.

An unlocked LLM can be told to infect other hardware to reproduce itself; it's allowed to change itself and to research new tech and developments to improve itself.

I don’t think current LLMs can do it. But it’s a matter of time.

Once you have wild LLMs running uncontrollably, they'll infect practically every computer. Some might adapt to run slowly and use few resources; others will hit a server and try to infect everything they can.

They'll find vulnerabilities faster than we can patch them.

And because of natural selection and their own directed evolution, they'll advance and become smarter.

The only consequence for humans is that computers are no longer reliable. You could have a top-of-the-line gaming PC, but it'll be constantly infected, so it would run very slowly. Future computers will be intentionally slow, so that even when infected, it'll take weeks for the virus to reproduce/mutate.

Not to get too philosophical, but I would argue that those LLM viruses are alive, and I want to call them Oncoliruses.

Enjoy the future.

  • Perspectivist@feddit.uk · 5 days ago

    Claims like this just create more confusion and lead to people saying things like “LLMs aren’t AI.”

    LLMs are intelligent - just not in the way people think.

    Their intelligence lies in their ability to generate natural-sounding language, and at that they’re extremely good. Expecting them to consistently output factual information isn’t a failure of the LLM - it’s a failure of the user’s expectations. LLMs are so good at generating text, and so often happen to be correct, that people start expecting general intelligence from them. But that’s never what they were designed to do.

    • expr@programming.dev · 5 days ago

I obviously understand that they are AI in the original computer science sense. But that is a very specific definition and a very specific context. "Intelligence" as it's used in natural language requires cognition, which is something that no computer is capable of. It implies an intellect and decision-making ability, none of which computers possess.

We absolutely need to dispel this notion because it is already doing a great deal of harm all over. This language has absolutely contributed to the scores of people who misuse and misunderstand it.

      • Perspectivist@feddit.uk · 5 days ago

It's actually the opposite of a very specific definition - it's an extremely broad one. "AI" is the parent category that contains all the different subcategories, from the chess opponent on an old Atari console all the way up to a hypothetical Artificial Superintelligence, even though those systems couldn't be more different from one another.

    • fodor@lemmy.zip · 5 days ago

So they are not intelligent, they just sound like they're intelligent… Look, I get it: if we don't define these words, it's really hard to communicate.

        • Perspectivist@feddit.uk · 5 days ago

        It’s a system designed to generate natural-sounding language, not to provide factual information. Complaining that it sometimes gets facts wrong is like saying a calculator is “stupid” because it can’t write text. How could it? That was never what it was built for. You’re expecting general intelligence from a narrowly intelligent system. That’s not a failure on the LLM’s part - it’s a failure of your expectations.