• Jimmycrackcrack@lemmy.ml · 39 points · 13 days ago

    I realise the dumbass here is the guy saying programmers are ‘cooked’, but there’s something kind of funny about how the programmer talks about people misunderstanding the complexities of his job, and about how LLMs easily make mistakes because they can’t grasp the nuances of what he does every day and understands deeply. They rightly point out that without their specialist oversight, AI agents would fail in ridiculous and spectacular ways, yet happily and vaguely add, as a throwaway statement at the end, “replacing other industries, sure,” with the exact same blitheness and lack of personal understanding with which ‘Ace’ proclaims all programmers cooked.

    • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 10 points · 13 days ago

      I find this is a really common trope where people appreciate the complexity of the domain they work in, but assume every other domain is trivial by comparison.

      • HiddenLayer555@lemmy.ml · 6 points · 12 days ago (edited)

        There’s a saying in Mandarin that translates to something like: Being in different professions is like being on opposite sides of a mountain. It basically means you can never fully understand a given profession unless you’re actually doing it.

      • SkyeStarfall@lemmy.blahaj.zone · 3 points · 13 days ago

        Take an interest in, and try to learn at least a little about, basically any other topic, and you’ll quickly realize that nearly everything has near-infinite complexity and depth.

        The rabbit holes are endless; there’s so much knowledge to have everywhere. The people who don’t appreciate this fact and say stuff like in the screenshot are a bit foolish, I think.

  • Ephera@lemmy.ml · 32 points · 14 days ago

    It’s like a conspiracy theory for that guy. Everyone who tells him it’s not true that you can get rid of programmers has to be a programmer, and therefore cannot be trusted.

    • moseschrute@lemmy.ml · 22 points · 14 days ago

      To be fair, we should probably all start migrating to cybersecurity positions. They’ll need it when they discover how many vulnerabilities were created by all the non-programmers vibe coding.

      • bountygiver [any]@lemmy.ml · 2 points · 13 days ago (edited)

        And it will be a good time to start exploiting them for nefarious purposes. In particular, hit companies where it hurts, the balance books, so that it actually starts driving demand for fixing those vulnerabilities.

  • roux [he/him, they/them]@hexbear.net · 16 points · 13 days ago

    My favorite thing about ChatGPT is having to constantly remind it that I’m writing stuff in AstroJS, not ReactJS.

    So, uh where’s this link for a junior position starting out at $145k? Asking for a friend…

    • crusa187@lemmy.ml · 2 points · 13 days ago

      Agents are supposed to be self-sustaining automated services; this is a misappropriation of the term, but perhaps it doesn’t matter since it’s all novel tech anyway. Marketing folks are going to do their thing.

  • CapriciousDay@lemmy.ml · 11 points · 13 days ago

    I don’t know if they quite appreciate that if programmers are cooked like this, everyone who isn’t a billionaire is too. Let me introduce you to robot transformer models.

    • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 7 points · 13 days ago

      Exactly: to eliminate the need for programmers you would need AGI, and that would simply mean the end of capitalism, because at that point any job a human does can be automated.

      • WhatsTheHoldup@lemmy.ml · 6 points · 13 days ago

        Ah but I’ll simply ask ChatGPT to generate me a job.

        Personally I haven’t gotten any jobs this way but that doesn’t mean employers aren’t cooked

        They are 👍

  • verdigris@lemmy.ml · 9 points · 13 days ago

    The problem is that too many execs are thinking like this guy. It’s not actually tenable to replace programmers with AI, but people who aren’t programmers are less likely to understand that.

  • Anna@lemmy.ml · 9 points · 13 days ago

    When AI can sit through a dozen meetings discussing stupid things, only to finalize whatever you had decided earlier, then I’ll be worried.

    • CarrotsHaveEars@lemmy.ml · 1 point · 12 days ago

      Personally, I would happily let my AI bot attend the stupid scrum meetings for me. Let it tell my scrum master and stakeholders my progress for the day and for the sprint. Don’t bother me during my coding time.

      • balsoft@lemmy.ml · 3 points · 12 days ago

        We made a (so far internal) tool at work that takes your activity from GitHub, your calendar, and the issue tracker, feeds that to a local LLM, and spits out a report of what you have been doing for the week. It messes up sometimes, but it speeds up the process of writing the report dramatically. This is one of those cases where an LLM actually fits.
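A minimal sketch of the prompt-assembly step such a tool might use. Everything here is an assumption for illustration, not the actual internal tool: the function name, the report format, and the example inputs are all hypothetical, and the call to the local LLM is omitted since it depends on whatever model server is in use.

```rust
// Hypothetical sketch: gather activity from several sources into one prompt
// that a local LLM can turn into a weekly status report.
// All names and the layout are illustrative assumptions.
fn build_weekly_report_prompt(commits: &[&str], meetings: &[&str], issues: &[&str]) -> String {
    let mut prompt =
        String::from("Write a short weekly status report based on the following activity.\n");

    // Each source becomes its own labelled section of the prompt.
    let sections: [(&str, &[&str]); 3] = [
        ("GitHub commits", commits),
        ("Calendar events", meetings),
        ("Issue tracker updates", issues),
    ];
    for (label, items) in sections {
        prompt.push('\n');
        prompt.push_str(label);
        prompt.push_str(":\n");
        for item in items {
            prompt.push_str("- ");
            prompt.push_str(item);
            prompt.push('\n');
        }
    }
    prompt
}

fn main() {
    // The assembled prompt would then be sent to the local LLM's API;
    // that call is left out because it depends on the model server.
    let prompt = build_weekly_report_prompt(
        &["Fix flaky integration test"],
        &["Sprint planning"],
        &["Closed #42: login timeout"],
    );
    println!("{prompt}");
}
```

Keeping the data-gathering and formatting deterministic, and only using the LLM for the final summarization, is what makes this a good fit: a wrong sentence in a status report is cheap to correct.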

        • CarrotsHaveEars@lemmy.ml · 1 point · 11 days ago

          You know what they say: (⁠☞⁠ ͡⁠°⁠ ͜⁠ʖ⁠ ͡⁠°⁠)⁠☞ Fight executives’ bullshit with executives’ bullshit.

  • hexaflexagonbear [he/him]@hexbear.net · 8 points · 13 days ago

    I was making ChatGPT do some tedious thing, and I kept telling it “you got X wrong”, and it kept going “oh you’re right, I got X wrong, I will not do that again” and then giving the exact same output. Lol, the one time ChatGPT gave me consistent outputs for the same prompt.

    • GnuLinuxDude@lemmy.ml · 4 points · 13 days ago

      Just yesterday I asked Llama 3.3 (70B parameters) how to do something. I was pretty sure it wouldn’t be able to tell me the right command to run, because I knew beforehand that I was asking something really obscure about how to use tar. I gave it all the relevant details. Imagine my surprise when it… told me the blatantly wrong thing. It even invented useless ways of running the command incorrectly.

  • HiddenLayer555@lemmy.ml · 6 points · 13 days ago (edited)

    LLMs can’t even stay on topic when specifically being asked to solve one problem.

    This happens to me all the damn time:

    I paste a class that references some other classes which I have already tested to be working, my problem is in a specific method that doesn’t directly call on any of the other classes. I tell the LLM specifically which method is not working, I also tell it that I have tested all the other methods and they work as intended (complete with comments documenting what they’re supposed to do). I then ask the LLM to only focus on the method I have specified, and it still goes on about “have you implemented all the other classes this class references? Here’s my shitty implementation of those classes instead.”

    So then I paste all the classes that the one I’m asking about depends on, reiterate that all of them have been tested and are working, tell the LLM which method has the problem again, and it still decides that my problem must be in the other classes and starts “fixing” them which 9 out of 10 times is just rearranging the code that I already wrote and fucking up the organisation that I had designed.

    It’s somewhat useful for searching for well-known example code using natural language, e.g. “How do I open a network socket using Rust”, or if your problem is really simple. Maybe it’s just the specific LLM I use, but in my experience it can’t actually problem-solve better than humans.
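For reference, the kind of well-known boilerplate that query tends to retrieve looks roughly like this. This is a self-contained sketch that talks to a local listener rather than a real server, so it runs without any network access; the message and port choice are arbitrary.

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};

fn main() -> std::io::Result<()> {
    // Bind a listener on an OS-assigned local port so the example
    // needs no external server.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;

    // Open a client socket to that address -- the "open a network
    // socket" boilerplate the comment is talking about.
    let mut client = TcpStream::connect(addr)?;
    client.write_all(b"ping")?;

    // Accept the connection on the server side and read the message back.
    let (mut server_side, _) = listener.accept()?;
    let mut buf = [0u8; 4];
    server_side.read_exact(&mut buf)?;
    println!("{}", String::from_utf8_lossy(&buf)); // prints "ping"
    Ok(())
}
```

This is exactly the sort of widely documented snippet where natural-language search over an LLM works well, as opposed to the project-specific debugging described above.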

    • zalgotext@sh.itjust.works · 2 points · 13 days ago

      Yeah, I find LLMs most useful for basically reading the docs for me and providing its own sample/pseudocode. If it goes off the rails, I have to guide it back myself using natural language. Even then, though, it’s still just a tool that gets me going in the right direction, or helps me consider alternative solutions buried in the docs that I might have skimmed over. Rarely does it produce code that I can actually use in my project.