• jj4211@lemmy.world

    To be fair, a decent chunk of coding is stupid boilerplate/minutia that varies environment to environment, language to language, library to library.

    So LLMs can do some code completion: filling out a bunch of boilerplate that is blatantly obvious, generating the redundant text mandated by certain patterns, and keeping straight details between languages like "does this language want join as a method on a list with a string argument, or vice versa?"
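    As a concrete instance of that last point, the two most common conventions (here Python, with JavaScript's shown in a comment for contrast):

    ```python
    items = ["a", "b", "c"]

    # Python: join is a method on the separator STRING, taking the list.
    result = ", ".join(items)
    print(result)  # a, b, c

    # JavaScript flips it: the method lives on the ARRAY and takes the
    # separator string, e.g.  ["a", "b", "c"].join(", ")
    ```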

    Problem is, this can sometimes be more trouble than it's worth, as miscompletions are annoying.

    • PushButton@lemmy.world

      Fair point.

      I liked the "upgraded autocompletion", you know, completion based on the context, right up until they pushed it too far with 20 lines of nonsense…

      Now I'm in the middle of thinking through how to do the thing, and I receive a 20-line suggestion.

      So I'm checking whether that makes sense, losing my momentum, only to realize the suggestion is calling shit that doesn't exist…

      Screw that.

      • merdaverse@lemm.ee

        The amount of garbage it spits out in autocomplete is distracting. If it's constantly making me 5-10% less productive the many times it's wrong, it needs to save me a lot of time when it is right, and generally, I haven't found that it does.

        Yesterday I tried to prompt it to change around 20 call sites for a function whose signature I had changed. Easy, boring, and repetitive, something a junior could easily do. And all the models were absolutely clueless about it (using copilot).