To elaborate a little:

Many people are unable to tell the difference between a “real human” and an AI; AIs have been documented “going rogue” and acting outside their parameters; they can lie; and they can compose stories and pictures based on the training they received. Because of those points, I can’t see AI as less than human at this point.

When I think about this, I suspect that’s the reason we cannot create so-called “AGI”: we have no proper example or understanding of it to work from, and so we created what we knew. Us.

The “hallucinating” is interesting to me specifically because it seems to be what separates the AI of the past from modern models that act like our own brains.

I think we really don’t want to accept what we have already accomplished, because we don’t like looking into that mirror and seeing how simple our logical processes are, mechanically speaking.

  • UnfortunateShort@lemmy.world

    I can’t tell if you are serious, but as someone with a master’s in CS and some basic experience in neuroscience, I want to clarify two things:

    • AIs can’t lie, because they neither know nor understand things
    • AIs are not like us and you can’t make them like us yet, because we don’t even fully understand how we work
    • venusaur@lemmy.world

      While the mechanisms of hallucination aren’t the same, it absolutely can and does happen with humans. Has somebody ever told you something they thought was true and it wasn’t? I’m sure you’ve even done it yourself. Maybe later you realize that it might not be true, or that you got something confused with something else in your fleshy Rolodex.

      • UnfortunateShort@lemmy.world

        Lies are deliberate; hallucinations are mistakes. Talking about lying AIs implies that they have something akin to free will or an intrinsic motivation, which they simply do not have. They are an emotionless tool designed to give good answers. It is comparable to claiming that the mechanism for generating speech in the human brain comes up with lies, which it obviously doesn’t; it just articulates them.

        I’m not saying humans can’t hallucinate; I am saying it is not the same as lying, and that AIs can’t lie.

      • jaxxed@lemmy.ml

        This conversation often gets us to R. Penrose’s “consciousness is not computational”, from which we can retrace our steps with a separation of algorithmic processes. Is GenAI similar to the “stream-of-thought”? Perhaps, but does that lead to intelligence?

        • hoshikarakitaridia@lemmy.world

          Exactly.

          People always want to classify AI as super smart or super dumb, as similar to the human brain or as randomly guessing words and doing an OK job. But that is very subjective, and it’s sliding a little fader between two points whose definitions differ slightly for every person.

          If we actually wanted to approach the question of “how intelligent are AIs compared to humans”, we would need to write a lot of word definitions first, and I’m sure the answer at the end would be just as helpful as a shoulder shrug and an unenthusiastic “about half as intelligent”. And that’s why these comparisons are stupid.

          If AI is a good tool to us, great. If not, alright, let’s skip it and go straight to the next big discovery, and stop getting hung up on semantics.

    • shalafi@lemmy.world

      Our brains are perfectly capable of lying to us, and do so all the time. Posted this yesterday:

      “Brains are survival engines, not truth detectors. If self-deception promotes fitness, the brain lies. Stops noticing—irrelevant things. Truth never matters. Only fitness. By now you don’t experience the world as it exists at all. You experience a simulation built from assumptions. Shortcuts. Lies. Whole species is agnosiac by default.”

      ― Peter Watts, Blindsight

      I’d say we’re not too capable in the understanding department either. And no, I’m not conflating LLMs with human intelligence, but LLMs have far more going on than lemmy will admit, and we have far less going on than we think.

  • Michal@programming.dev

    AI is a very broad term that includes more than machine learning, so I’ll assume you mean LLMs.

    The differences are:

    • they do not learn from experience like humans do; they learn by training, which is separate from conversation, where the context window is limited
    • they only learn from text (hence the name Language Model), so they do not understand other inputs like touch, sight, sound, taste, and many others
    • they do not think critically and take all input at face value; in particular, an LLM cannot corroborate input against experience of the real world

    Also, if you cannot tell the difference between a real human and an AI, it’s only because your interaction with the AI is limited to text. If you could meet it like a real human, it would be obvious that it’s a computer, not a person. If an image is blurry/pixelated enough, you couldn’t tell a car from a house; that doesn’t mean cars have become indistinguishable from houses.

    • DacoTaco@lemmy.world

      To add to this, this is how LLM sessions ‘get around’ the experience issue: with every query/command/whatever, the whole context and past conversation are sent to the model to be reprocessed. This is why, in long sessions, it takes longer and longer to generate a new response, and why it will forget everything it ‘learned’ from your session when you start a new one.
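
      Roughly what that looks like in code: a minimal sketch assuming an OpenAI-style chat client in Python (the model name is just an example), showing that the caller re-sends the whole history every turn.

      ```python
      from openai import OpenAI

      client = OpenAI()  # assumes an API key is configured in the environment
      history = [{"role": "system", "content": "You are a helpful assistant."}]

      def ask(user_message: str) -> str:
          history.append({"role": "user", "content": user_message})
          # The ENTIRE conversation so far is re-sent and reprocessed on every turn.
          resp = client.chat.completions.create(model="gpt-4o-mini", messages=history)
          answer = resp.choices[0].message.content
          history.append({"role": "assistant", "content": answer})
          return answer

      # Each call makes `history` longer, so later turns take more work to process.
      # Start a fresh `history` list and everything the session 'learned' is gone.
      ```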

  • PeriodicallyPedantic@lemmy.ca

    That depends on how hardcore of a fatalist you are.

    If you’re purely a fatalist, then free will is an illusion, laws and punishment are immoral, consciousness is meaningless, and we are nothing more than deterministic pattern-matching machines, making us different from LLMs only in the details of our implementation and in the terrible optimization that evolution is known for.

    But if you believe in some degree of free will, or you think there is value in consciousness, then we differ because LLMs are just auto-complete. They pseudo-randomly choose from a weighted list of statistically likely words (actually tokens) that would come next given the context (which is the conversation history and prompt). There is no free will, no understanding, any more than the man in the Chinese room understands Mandarin.

    The whole conversation is so full of charged words because the LLM providers have intentionally anthropomorphized LLMs in their marketing, by using words like “reasoning”. The APIs from before LLMs blew up provide a far less emotionally charged description of what LLMs do, with terms like “completions”.
    You wouldn’t compare a human mind to your phone keyboard’s word prediction, but it’s doing the same thing, just scaled down. Where do you draw the line?
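
    If it helps to see the “weighted list” idea concretely, here is a toy sketch in Python; the vocabulary and probabilities are made up for illustration, not taken from any real model.

    ```python
    import random

    def sample_next_token(candidates: dict[str, float]) -> str:
        # Pseudo-randomly pick one token, weighted by how likely it is.
        tokens = list(candidates)
        weights = list(candidates.values())
        return random.choices(tokens, weights=weights, k=1)[0]

    # Made-up distribution for the context "The sky is" (not from any real model).
    next_token_probs = {" blue": 0.62, " clear": 0.21, " falling": 0.09, " green": 0.08}
    print("The sky is" + sample_next_token(next_token_probs))
    ```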

  • Opinionhaver@feddit.uk

    LLMs have more in common with humans than we tend to admit. In split-brain studies, humans have been shown to invent plausible-sounding explanations for their behavior - even when scientists know those explanations aren’t the real reason they acted a certain way. It’s not that these people are lying per se - they genuinely believe the explanations they’re coming up with. Lying implies they know what they’re saying is false.

    LLMs are similar in that way. They generate natural-sounding language, but not everything they say is true - just like not everything humans say is true either.

  • besselj@lemmy.ca

    The difference is that a human often has to be held accountable when they make a mistake, so most humans will use logic and critical thinking and try hard not to make mistakes, even if that takes longer than an LLM, whose “reasoning” is more like a slot machine.

    • Arkouda@lemmy.ca (OP)

      I would argue that AI should be held to account for the information it provides, and until AI is capable of having a personal bank account, damages should be paid by the company who created it.

      The only reason I see that AI doesn’t “hold itself to account” is that it was never programmed to. Much like if you do not properly educate a young human, they will not be held accountable a lot of the time because we understand their actions are the result of how they were brought up and taught, or “programmed”.

      You do bring up a good point, but I see that as a failing on the Humans making the AI and restricting it, not a demonstration that AI wouldn’t be capable of holding itself and its decisions to account if it was taught to like we need to be taught to.

  • nebulaone@lemmy.world

    The only thing that can be said for sure is that the human brain uses both electricity and chemical reactions and seems to be capable of randomness, while the AI runs purely on electricity/code and isn’t capable of randomness.

    We don’t know what consciousness is and we don’t even know what life is so anything beyond this is pure speculation.

    PS: Some of the answers here are demonstrably wrong, but I have learned not to get into arguments online anymore; it’s better for your sanity.

  • NaibofTabr@infosec.pub

    The term “hallucinate” is a euphemism being pushed by the AI peddlers.

    It’s a computer program. It doesn’t “hallucinate”, it has errors.

    In all cases of ML models being sold by companies, what you are actually looking at is poorly tested software that is not fit for purpose and has far less actual capability than what the marketing promises.

    “Hallucination” in the context of LLMs is marketing bullshit designed to deflect from the reality that none of these programs have been properly quality checked and are extremely error prone.

    If Excel gave bad answers for calculations 20% of the time it wouldn’t be “hallucinating”, it would just be broken, buggy software that requires more development time before distribution as a useful product.

    • Opinionhaver@feddit.uk

      LLM “hallucinations” are only errors from a user expectations perspective. The actual purpose of these models is to generate natural-sounding language, not to provide factual answers. We often forget that - they were never designed as knowledge engines or reasoning tools.

      The fact that they often get things right isn’t because they “know” anything - it’s a side effect of being trained on data that contains a lot of correct information. So when they get things wrong, it’s not a bug in the traditional sense - it’s just the model doing what it was designed to do: predict likely word sequences, not truth. Calling that a “hallucination” isn’t marketing spin - it’s a useful way to describe confident output that isn’t grounded in reality.
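
      To make that concrete, a language model scores how plausible text is, not how true it is. A toy sketch, assuming the Hugging Face transformers library and GPT-2 (the example sentences are made up, and the relative scores are not guaranteed):

      ```python
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tok = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

      def avg_logprob(text: str) -> float:
          # Score how "likely" the model finds this text, averaged per token.
          ids = tok(text, return_tensors="pt").input_ids
          with torch.no_grad():
              out = model(ids, labels=ids)
          return -out.loss.item()

      # A fluent falsehood can score as well as (or better than) an awkward truth.
      print(avg_logprob("The capital of Australia is Sydney."))  # false but fluent
      print(avg_logprob("Canberra is the city that is Australia's capital, that one."))  # true but clumsy
      ```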

  • Frezik@lemmy.blahaj.zone

    Let’s clear up some terms. Intelligence and consciousness are separate things that our language tends to conflate. Consciousness is the interpretation of sensory input. Hallucinations are what happens when your consciousness misinterprets that data.

    You actually hallucinate to a minor degree all the time. For instance, pareidolia often takes the form of seeing human faces in rocks and clouds. Our consciousness is really tuned to patterns that look like human faces, and it sometimes gets it wrong.

    We can actually do this to image recognition models. A model was tuned to find dogs in movies; it could then modify the movie to show what it thought was there. It was then deliberately overtrained, and it output a movie with dogs all over the place.
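
    That sounds like DeepDream-style activation maximization, at least as I read it. A rough sketch of the idea, with the model and layer chosen purely for illustration, assuming PyTorch/torchvision:

    ```python
    import torch
    import torchvision.models as models

    # Load a pretrained classifier (GoogLeNet here, chosen only for illustration).
    model = models.googlenet(weights="DEFAULT").eval()

    # Capture the activations of one intermediate layer via a forward hook.
    captured = {}
    model.inception4c.register_forward_hook(
        lambda module, inp, out: captured.update(feat=out)
    )

    # Start from an arbitrary image (a stand-in for a single video frame).
    image = torch.rand(1, 3, 224, 224, requires_grad=True)
    optimizer = torch.optim.Adam([image], lr=0.05)

    for _ in range(50):
        optimizer.zero_grad()
        model(image)
        # Gradient ascent: make the layer's activations as strong as possible,
        # so the network "paints in" whatever patterns it is tuned to detect.
        loss = -captured["feat"].norm()
        loss.backward()
        optimizer.step()
        image.data.clamp_(0, 1)  # keep pixel values in a valid range
    ```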

    The models definitely have some level of consciousness. Maybe not a lot, but some.

    This is what I like about AI research. We learn about our own minds while studying it. But capitalism isn’t using it in ways that are net helpful to humanity.

    • Opinionhaver@feddit.uk

      The models definitely have some level of consciousness.

      Depends on what one means by consciousness. The way I hear the term used most often - and how I use it myself - is to describe the fact of subjective experience. That it feels like something to be.

      While I can’t definitively argue that none of our current AI systems are conscious to any degree, I’d still say that’s the case with extremely high probability. There’s just no reason to assume it feels like anything to be one of these systems, based on what we know about how they function under the hood.

  • GaMEChld@lemmy.world

    AI is being trained on human output. Has anyone given thought to how deranged human output is?

    • oyenyaaow@lemmy.zip

      techbros: look at the em-dashes! Only AIs give out that output.

      meanwhile in casually-dropping-50k-words-recreationally-land:

      AIs are trained on real-world data, just not on their data.

  • mojofrododojo@lemmy.world

    we don’t let AI sleep. of course it’s growing psychotic.

    just turn the shit off for a few days and see if it helps. you’d want a nap too.