• nova_ad_vitum@lemmy.ca · 22 points · 2 days ago

    GothamChess has a video of making ChatGPT play chess against Stockfish. Spoiler: ChatGPT does not do well. It plays okay for a few moves, but the moment it gets in trouble it straight up cheats. Telling it to follow the rules of chess doesn’t help.

    This sort of gets to the heart of LLM-based “AI”. That one example, to me, really shows that there’s no actual reasoning happening inside. It’s producing answers that statistically look like answers that might follow from that input.

    For some things it even works. But calling this intelligence is dubious at best.

    • Ultraviolet@lemmy.world · 5 points · edited 2 days ago

      Because it doesn’t have any understanding of the rules of chess, or even an internal model of the game state. It just has the text of chess games in its training data and can reproduce the notation, but there’s nothing to prevent it from making illegal moves, trying to move or capture pieces that don’t exist, incorrectly declaring check or checkmate, or doing any number of other nonsensical things.
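
      A minimal sketch of the missing piece, using the python-chess library (`ask_llm` here is a hypothetical stand-in for the model call): keep a real board object as the source of truth and reject anything illegal.

      ```python
      # Validate LLM-proposed moves against a real game state.
      import chess

      board = chess.Board()

      def apply_llm_move(san_move: str) -> bool:
          """Push the move only if it is legal in the current position."""
          try:
              move = board.parse_san(san_move)  # raises ValueError on illegal or garbled SAN
          except ValueError:
              return False  # nonexistent piece, illegal capture, bad notation...
          board.push(move)
          return True

      # In the game loop: apply_llm_move(ask_llm(board.fen()));
      # on False, re-prompt the model or forfeit on its behalf.
      ```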

    • JacksonLamb@lemmy.world · 3 points · 2 days ago

      ChatGPT versus DeepSeek is hilarious. They both cheat like crazy, and then one side Jedi mind tricks the winner into losing.

    • interdimensionalmeme@lemmy.ml · 1 point · 2 days ago

      I think the biggest problem is its very weak “test-time adaptability”. Even when combined with a reasoning model outputting into its context, the weights themselves don’t learn anything outside the immediate context.

      I think the solution might be to train a LoRA overlay on the fly against the weights, run inference with that AND the unmodified weights, and then have an overseer model self-evaluate and recompose the raw outputs.
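
      In plain PyTorch the overlay part of that idea looks roughly like this (a hand-rolled sketch, not any particular library’s API; the overseer/recombination step is left out, since nothing off the shelf does exactly that):

      ```python
      # LoRA-style overlay: the base weight stays frozen (the "unmodified
      # weights"); only the small low-rank pair A, B is trained on the fly,
      # and the forward pass combines both.
      import torch.nn as nn

      class LoRALinear(nn.Module):
          def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
              super().__init__()
              self.base = base
              for p in self.base.parameters():
                  p.requires_grad = False      # original weights untouched
              self.A = nn.Linear(base.in_features, rank, bias=False)
              self.B = nn.Linear(rank, base.out_features, bias=False)
              nn.init.zeros_(self.B.weight)    # overlay starts as a no-op
              self.scale = alpha / rank

          def forward(self, x):
              return self.base(x) + self.scale * self.B(self.A(x))
      ```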

      Humans are way better at answering stuff when more than one person collaborates. I suspect the same is true of LLMs.

      • nednobbins@lemm.ee · 1 point · 2 days ago

        Humans are way better at answering stuff when more than one person collaborates. I suspect the same is true of LLMs.

        It is.

        It’s really common for non-language applications of neural networks. If you have an NN that’s right some percentage of the time, you can often run the same input through a bunch of independently trained copies of the network, average their outputs, and that average is correct a higher percentage of the time.
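
        A quick back-of-the-envelope illustration of why ensembling helps, under the (optimistic) assumption that the copies make independent errors:

        ```python
        # If each of n independent classifiers is right with probability p,
        # a majority vote is right with probability
        #   sum over k > n/2 of C(n, k) * p^k * (1-p)^(n-k).
        from math import comb

        def majority_vote_accuracy(p: float, n: int) -> float:
            return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                       for k in range(n // 2 + 1, n + 1))

        print(majority_vote_accuracy(0.7, 1))   # 0.70 -- a single network
        print(majority_vote_accuracy(0.7, 5))   # ~0.84
        print(majority_vote_accuracy(0.7, 15))  # ~0.95
        ```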

        Aider is an open source AI coding assistant that lets you use one model to plan the changes and a second one to write the actual code. It works better than doing it in a single pass, even if you assign the same model to both planning and coding.
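
        The two-pass split itself is easy to sketch (`complete()` below is a hypothetical stand-in for whatever chat-completion client you use, not Aider’s real internals):

        ```python
        # Plan-then-code in two passes, in the spirit of Aider's
        # architect/editor split.

        def complete(model: str, prompt: str) -> str:
            """Hypothetical wrapper around a chat-completion API call."""
            raise NotImplementedError("wire up your LLM client here")

        def plan_then_code(task: str, planner: str, coder: str) -> str:
            plan = complete(planner, f"Plan, step by step, how to implement:\n{task}")
            return complete(coder, f"Write only the code for this plan:\n{plan}")

        # Two different models, or the same model in both roles:
        # plan_then_code("sum P&L from a CSV of trades", "model-a", "model-b")
        ```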