LOOK MAA I AM ON FRONT PAGE

  • SoftestSapphic@lemmy.world · +74/−1 · 2 days ago

    Wow it’s almost like the computer scientists were saying this from the start but were shouted over by marketing teams.

    • zbk@lemmy.ca · +21/−2 · 2 days ago

      This! Capitalism is going to be the end of us all. OpenAI has gotten away with IP theft, disinformation about AI, and maybe even the murder of their whistleblower.

  • billwashere@lemmy.world · +39/−4 · 2 days ago

    When are people going to realize that, in its current state, an LLM is not intelligent? It doesn't reason. It doesn't have intuition. It's a word predictor.

    • x0x7@lemmy.world · +8/−2 · 2 days ago (edited)

      Intuition is about the only thing it has. It's a statistical system. The problem is that it doesn't have logic. We assume that because it's computer-based it must be logic-oriented, but it's the opposite. That's the problem: we can't get it to do logic very well, because it basically feels out the next token by something like instinct. In particular, it doesn't mask out or disregard irrelevant information very well when two segments are near each other in embedding space, since proximity doesn't guarantee relevance. The model just weighs all of that information, relevant or irrelevant, into a weighted feeling for the next token.
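
      A rough sketch of that "weighted feeling" (toy vectors and invented tokens, not a real model): similarity in embedding space turns into softmax weight, and since nothing is ever masked to zero, a nearby-but-irrelevant token still pulls real weight.

      ```python
      import numpy as np

      # Toy 3-d embeddings. "bank_river" sits near "bank_money" in
      # embedding space even when only one sense is relevant.
      emb = {
          "deposit":    np.array([0.9, 0.1, 0.0]),
          "bank_money": np.array([0.8, 0.2, 0.1]),
          "bank_river": np.array([0.7, 0.3, 0.2]),  # irrelevant here, but nearby
      }

      query = emb["deposit"]
      scores = np.array([v @ query for v in emb.values()])

      # Softmax keeps every token at nonzero weight, so the irrelevant
      # neighbor still leans on the next-token "feeling".
      weights = np.exp(scores) / np.exp(scores).sum()
      for tok, w in zip(emb, weights):
          print(f"{tok}: {w:.2f}")
      ```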

      This is the core problem. People can handle fuzzy topics and discrete topics. But we really struggle to create any system that can do both like we can. Either we create programming logic that is purely discrete or we create statistics that are fuzzy.

      Of course, masking out information that is close in embedding space but irrelevant to a logical premise is something many humans suck at too. But high-functioning humans don't, and we can't get these models to copy that ability. Too many people, sadly many on the left in particular, not only treat association as always relevant but sometimes as equivalence: racism is associated with Nazism, Nazism is associated with patriarchy, patriarchy is historically related to the origins of capitalism, therefore Nazism ≡ capitalism. Meanwhile, National Socialism was anti-capitalist. Associative thinking removes nuance, and sadly some people think this way. They can 100% be replaced by LLMs today, because the LLM at least mimics what logic looks like better, even though it is still built on blind association. It just has more blind associations, and fine-tuned weights for summing them, than a human does, so it can carry the imitation of logic further than a human on the associative thought train can.

      • Buddahriffic@lemmy.world · +4 · 2 days ago

        They want something like the Star Trek computer, or one of Tony Stark's AIs that were basically a deus ex machina for solving some hard problem behind the scenes. Then it can say "model solved," or they can show a test simulation where the ship doesn't explode (or sometimes a test where it only has an 85% chance of exploding when it used to be 100%, at which point human intuition comes in, saves the day by suddenly being better than the AI again, and threads that 15% needle, or maybe abducts the captain to go have lizard babies).

        AIs that are smarter than us but for some reason don’t replace or even really join us (Vision being an exception to the 2nd, and Ultron trying to be an exception to the 1st).

    • SaturdayMorning@lemmy.ca · +3/−1 · 2 days ago

      I agree with you. In its current state, an LLM is not sentient, and thus not "intelligent".

      • MouldyCat@feddit.uk · +3 · 2 days ago

        I think it's an easy mistake to confuse sentience with intelligence. It happens in Hollywood all the time - "Skynet began learning at a geometric rate; on July 23, 2004, it became self-aware," yadda yadda.

        But that's not how sentience works. We don't have to be as intelligent as Skynet supposedly was in order to be sentient. We don't start our lives as unthinking robots and then one day - once we've finally got a handle on calculus or a deep enough understanding of the causes of the fall of the Roman empire - suddenly blink into consciousness. On the contrary, even the stupidest humans are accepted as being sentient. Even a young child, not yet able to walk or do anything more than vomit on their parents' new sofa, is considered a conscious individual.

        So there is no reason to think that AI - whenever it should be achieved, if ever - will be conscious any more than the dumb computers that precede it.

    • jj4211@lemmy.world · +1 · 2 days ago

      And that's pretty damn useful, but it's obnoxious when expectations are set so wildly wrong.

  • Mniot@programming.dev · +30/−1 · 2 days ago

    I don’t think the article summarizes the research paper well. The researchers gave the AI models simple-but-large (which they confusingly called “complex”) puzzles. Like Towers of Hanoi but with 25 discs.

    The solution to these puzzles is nothing but patterns. You can write code that solves the Tower puzzle for any size n, and the whole program fits on one screen.
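
    For reference, a minimal sketch of the standard recursive solver (the whole algorithm, with the disc count as a parameter):

    ```python
    def hanoi(n, src="A", dst="C", aux="B"):
        """Print the optimal move sequence for n discs (2**n - 1 moves)."""
        if n == 0:
            return
        hanoi(n - 1, src, aux, dst)              # clear n-1 discs onto the spare peg
        print(f"move disc {n}: {src} -> {dst}")  # move the largest disc
        hanoi(n - 1, aux, dst, src)              # restack the n-1 discs on top

    hanoi(4)  # demo; the paper's 25-disc case is hanoi(25): 2**25 - 1 = 33,554,431 moves
    ```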

    The problem the researchers see is that on these long, pattern-based solutions, the models follow a bad path and then just give up long before they hit their limit on tokens. The researchers don’t have an answer for why this is, but they suspect that the reasoning doesn’t scale.

  • minoscopede@lemmy.world · +49/−2 · 2 days ago (edited)

    I see a lot of misunderstandings in the comments 🫤

    This is a pretty important finding for researchers, and it’s not obvious by any means. This finding is not showing a problem with LLMs’ abilities in general. The issue they discovered is specifically for so-called “reasoning models” that iterate on their answer before replying. It might indicate that the training process is not sufficient for true reasoning.

    Most reasoning models are not incentivized to think correctly, and are only rewarded based on their final answer. This research might indicate that’s a flaw that needs to be corrected before models can actually reason.
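
    To make "rewarded based on their final answer" concrete, here's a simplified sketch of an outcome-only reward (real RL training pipelines are more involved, and these function names are invented):

    ```python
    def outcome_reward(reasoning_steps: list[str], final_answer: str, gold: str) -> float:
        # Only the final answer is scored; every intermediate "reasoning"
        # step shares the credit or blame, sound or not.
        return 1.0 if final_answer.strip() == gold.strip() else 0.0

    # A process-reward variant would score the steps themselves, e.g.
    #   sum(step_is_valid(s) for s in reasoning_steps) / len(reasoning_steps)
    # which is one proposed way to incentivize thinking correctly.
    ```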

    • Knock_Knock_Lemmy_In@lemmy.world · +17/−5 · 2 days ago

      When given explicit instructions to follow, the models failed because they had not seen similar instructions before.

      This paper shows that there is no reasoning in LLMs at all, just extended pattern matching.

    • theherk@lemmy.world · +17/−5 · 2 days ago

      Yeah, these comments have the three hallmarks of Lemmy:

      • "AI is just autocomplete" mantras.
      • Apple is always synonymous with bad and dumb.
      • Rare pockets of really thoughtful comments.

      Thanks for at least being the third.

    • REDACTED@infosec.pub · +11/−2 · 2 days ago (edited)

      What confuses me is that we seemingly keep narrowing what counts as reasoning. Not too long ago, a smart algorithm, or a bunch of if/then instructions in software, was officially, by definition, computer reasoning; by that definition, CPUs do it all the time. Suddenly, when AI is doing that with pattern recognition, memory, and even more advanced algorithms, it's no longer reasoning? At this point, the more relevant question is "What exactly is reasoning?" Before you answer, consider that most humans seemingly live by pattern recognition, not reasoning.

      https://en.wikipedia.org/wiki/Reasoning_system
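
      One classic kind of system that article covers is a forward-chaining rule engine, which really is just if/then applied repeatedly. A toy sketch (facts and rules invented for illustration):

      ```python
      # Derive new facts by repeatedly firing if/then rules until nothing
      # changes: the classic symbolic "reasoning system".
      facts = {"socrates_is_human"}
      rules = [
          ({"socrates_is_human"}, "socrates_is_mortal"),
          ({"socrates_is_mortal"}, "socrates_will_die"),
      ]

      changed = True
      while changed:
          changed = False
          for premises, conclusion in rules:
              if premises <= facts and conclusion not in facts:
                  facts.add(conclusion)  # the rule fires
                  changed = True

      print(facts)  # explicit deduction, no statistics involved
      ```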

      • stickly@lemmy.world · +4/−1 · 2 days ago

        If you want to boil down human reasoning to pattern recognition, the sheer amount of stimuli and associations built off of that input absolutely dwarfs anything an LLM will ever be able to handle. It’s like comparing PhD reasoning to a dog’s reasoning.

        While a dog can learn some interesting tricks, and the smartest dogs can solve simple novel problems, there are hard limits. They simply lack strong metacognition and the ability to make simple logical inferences (e.g., that's why they fail at the shell game).

        Now we make that chasm even larger by cutting the stimuli to a fixed token limit. An LLM can do some clever tricks within that limit, but it’s designed to do exactly those tricks and nothing more. To get anything resembling human ability you would have to design something to match human complexity, and we don’t have the tech to make a synthetic human.

    • AbuTahir@lemm.ee (OP) · +4 · 2 days ago

      Cognitive scientist Douglas Hofstadter argued (1979) that reasoning emerges from pattern recognition and analogy-making - abilities that modern AI demonstrably possesses. The question isn't whether AI can reason, but how its reasoning differs from ours.

    • Tobberone@lemm.ee · +4/−3 · 2 days ago

      What statistical method do you base that claim on? The results presented match expectations, given that Markov chains are still the basis of inference. What magic juice is added to "reasoning models" that allows them to break free of the inherent boundaries of the statistical methods they are based on?

      • minoscopede@lemmy.world · +1 · 1 day ago (edited)

        I'd encourage you to dig into this space and learn more.

        As it is, the statement "Markov chains are still the basis of inference" doesn't make sense, because Markov chains are a separate thing. You might be thinking of Markov decision processes, which are used in training RL agents, but that's also unrelated, because these models are not RL agents; they're supervised learning models. And even if they were RL agents, the MDP describes the training environment, not the model itself, so it isn't used for inference.
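
        To make the distinction concrete: in a Markov chain the next token depends only on a fixed-size current state, as in this toy bigram sampler, whereas a transformer conditions on the entire context window through attention (made-up corpus for illustration):

        ```python
        import random
        from collections import defaultdict

        # Bigram Markov chain: the next word depends ONLY on the current word.
        corpus = "the cat sat on the mat and the cat slept".split()
        chain = defaultdict(list)
        for a, b in zip(corpus, corpus[1:]):
            chain[a].append(b)

        word, out = "the", ["the"]
        for _ in range(8):
            options = chain.get(word)
            if not options:                # dead end: no observed successor
                break
            word = random.choice(options)  # memoryless, one-token state
            out.append(word)
        print(" ".join(out))

        # A transformer instead scores the next token against every position
        # in its context window, so its "state" is the whole prompt.
        ```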

        I mean this just as an invitation to learn more, not as pushback for raising concerns. Many in the research community would be more than happy to welcome you in. The world needs more people who are skeptical of AI doing research in this field.

        • Tobberone@lemm.ee · +1 · 18 hours ago

          Which method, then, is the inference built upon, if not the embeddings? And the question still stands, how does “AI” escape the inherent limits of statistical inference?

  • Nanook@lemm.ee · +166/−11 · 3 days ago

    lol is this news? I mean, we call it AI, but it's just an LLM and variants; it doesn't think.

    • Melvin_Ferd@lemmy.world · +9/−20 · 3 days ago

      This is why I say these articles are so similar to how right-wing media covers immigration.

      There's some weird media push to convince the left to hate AI. Think of all the headlines for these issues; there are so many similarities. They're taking jobs. They're a threat to our way of life. The headlines talk about how they will sexually assault your wife, your children, you. Threats to the environment. Then there are articles like this, where they take something known, twist it to make it sound nefarious, and keep the story alive to avoid decay of interest.

      Then when they pass laws, we're all primed to accept them removing whatever it is that advantages them and disadvantages us.

  • skisnow@lemmy.ca · +19/−2 · 2 days ago

    What's hilarious/sad is the response to this article over on reddit's "singularity" sub, where all the top comments are from people who've obviously never gotten all the way through a research paper in their lives, all trashing Apple and claiming its researchers don't understand AI or "reasoning". It's a weird cult.

    • jj4211@lemmy.world · +1/−1 · 2 days ago

      Without someone being explicit, with well-researched material, the marketing presentation gets to stand largely unopposed.

      So this is good, even if most experts in the field consider it an obvious result.

  • melsaskca@lemmy.ca · +7/−1 · 2 days ago

    It's all "one instruction at a time," regardless of high processor speeds and words like "intelligent" being bandied about. "Reason" discussions should fall into the same bucket as "sentience".

  • RampantParanoia2365@lemmy.world · +19/−6 · 2 days ago (edited)

    Fucking obviously. Until Data's positronic brain becomes reality, AI is not actual intelligence.

    AI is not A I. I should make that a t-shirt.

  • GaMEChld@lemmy.world · +33/−15 · 2 days ago

    Most humans don’t reason. They just parrot shit too. The design is very human.

    • El Barto@lemmy.world · +21/−4 · 2 days ago

      LLMs deal with tokens. Essentially, predicting a series of bytes.

      Humans do much, much, much, much, much, much, much more than that.

    • joel_feila@lemmy.world · +8 · 2 days ago

      That's why CEOs love them. When your job is 90% spewing BS, a machine that does that is impressive.

    • skisnow@lemmy.ca · +9/−4 · 2 days ago

      I hate this analogy. As a throwaway whimsical quip it’d be fine, but it’s specious enough that I keep seeing it used earnestly by people who think that LLMs are in any way sentient or conscious, so it’s lowered my tolerance for it as a topic even if you did intend it flippantly.

      • GaMEChld@lemmy.world · +1 · 20 hours ago

        I don't mean it to extol LLMs but rather to denigrate humans. How many of us are self-imprisoned in echo chambers so we can have our feelings validated and avoid the uncomfortable work of thinking critically and perhaps changing viewpoints?

        Humans have the ability to actually think, unlike LLMs. But it's frightening how far we'll go to make sure we don't.

    • SpaceCowboy@lemmy.ca · +4/−8 · 2 days ago

      Yeah, I've always said the flaw in Turing's Imitation Game concept is that an AI being indistinguishable from a human wouldn't prove it's intelligent, because humans are dumb as shit. Dumb enough to force one of the smartest people in the world to take a ton of drugs that eventually killed him, simply because he was gay.

      • jnod4@lemmy.ca · +2/−1 · 2 days ago

        I think that person had to choose between the drugs and hardcore prison in 1950s England, where being a bit odd was enough to guarantee an incredibly difficult time, as they say in England. I would've chosen the drugs as well, hoping they would fix me. Too bad that without testosterone you're going to be suicidal and depressed; I'd rather keep my hair than be horny all the time.

      • Zenith@lemm.ee · +4/−3 · 2 days ago

        Yeah, we're so stupid that we've figured out advanced maths and physics and built incredible skyscrapers and the LHC. Individuals may be more or less intelligent, but humans as a whole are incredibly intelligent.

  • Jhex@lemmy.world · +39/−6 · 3 days ago

    this is so Apple, claiming to invent or discover something “first” 3 years later than the rest of the market