• merthyr1831@lemmy.ml · 16 days ago

    AI is yet another technology that enables morons to think they can cut out the middleman of programming staff, only to very quickly realise that we’re more than just monkeys with typewriters.

  • Hilarious and true.

    Last week some new up-and-coming coder was showing me their tons and tons of sites made with the help of ChatGPT. They all look great on the front end. So I tried to use one. Error. Tried to use another. Error. Mentioned the errors and they brushed it off. I am 99% sure they do not have the coding experience to fix the errors. I politely disconnected from them at that point.

    What’s worse is when a noncoder asks me, a coder, to look over and fix their AI-generated code. My response is “no, but if you set aside an hour I will teach you how HTML works so you can fix it yourself.” Not one of these kids asking AI to code things has ever accepted, which, to me, means they aren’t worth my time. Don’t let them use you like that. You aren’t another tool they can combine with AI to generate things correctly without having to learn things themselves.

    • MyNameIsIgglePiggle@sh.itjust.works · 16 days ago

      I’ve been a professional full stack dev for 15 years and dabbled for years before that - I can absolutely code and know what I’m doing (and have used cursor and just deleted most of what it made for me when I let it run)

      But my frontends have never looked better.

  • rtxn@lemmy.world · 16 days ago

    “If you don’t have organic intelligence at home, store-bought is fine.” - leo (probably)

  • Electric@lemmy.world · 16 days ago

    Is the implication that he made a super insecure program and left the token for his AI thing in the code as well? Or is he actually being hacked because others are coping?

    • grue@lemmy.world · 16 days ago

      Nobody knows. Literally nobody, including him, because he doesn’t understand the code!

    • Mayor Poopington@lemmy.world · 16 days ago

      AI writes shitty code that’s full of security holes, and Leo here has probably taken zero steps to further secure his code. He broadcasts his AI-written software and it’s open season for hackers.

      • T156@lemmy.world · 16 days ago

        Not just that, but he literally advertised himself as not being technical. That seems to be just asking for open season.

    • Ephera@lemmy.ml · 16 days ago

      Potentially both, but you don’t really have to ask to be hacked. Just put something into the public internet and automated scanning tools will start checking your service for popular vulnerabilities.

    • JustAnotherKay@lemmy.world · 16 days ago

      He told them which AI he used to make the entire codebase. I’d bet it’s way easier to RE the “make a full SaaS suite” prompt than it is to RE the code itself once it’s compiled.

      Someone probably poked around with the AI until they found a way to abuse his SaaS.

    • RedditWanderer@lemmy.world · 16 days ago

      Doesn’t really matter. The important bit is he has no idea either. (It’s likely the former and he’s blaming the weirdos trying to get in)

  • rekabis@programming.dev · 16 days ago

    The fact that “AI” hallucinates so extensively and gratuitously just means that the only way it can benefit software development is as a gaggle of coked-up juniors who keep a senior from working on their own stuff because they’re constantly in janitorial mode.

    • Devanismyname@lemmy.ca · 16 days ago

      It’ll just keep getting better at it over time, though. The current AI is way better than it was 5 years ago, and in 5 years it’ll be way better than it is now.

      • almost1337@lemm.ee · 16 days ago

        That’s certainly one theory, but as we are largely out of training data there’s not much new material to feed in for refinement. Using AI output to train future AI is just going to amplify the existing problems.

        • Devanismyname@lemmy.ca · 16 days ago

          I mean, the proof is sitting there wearing your clothes. General intelligence exists all around us. If it can exist naturally, we can eventually do it through technology. Maybe there need to be more breakthroughs before it happens.

            • mindbleach@sh.itjust.works · 16 days ago

              I mean - have you followed AI news? This whole thing kicked off maybe three years ago, and now local models can render video and do half-decent reasoning.

              None of it’s perfect, but a lot of it’s fuckin’ spooky, and any form of “well it can’t do [blank]” has a half-life.

              • SaraTonin@lemm.ee · 16 days ago

                If you follow AI news you should know that it’s basically out of training data, that the returns on extra training fall off exponentially (so extra training data would only have limited impact anyway), that companies are starting to train AI on AI-generated data, both intentionally and unintentionally, and that hallucinations and unreliability are baked into the technology.

                You also shouldn’t take improvements at face value. The latest ChatGPT is better than the previous version, for sure. But its achievements are exaggerated (for example, it already knew the answers ahead of time for the specific maths questions it was demoed answering, and isn’t better than before, or than other LLMs, at solving maths problems it doesn’t already have the answers to hardcoded), and the way it operates is to have a second LLM check its outputs. Which means it takes, IIRC, 4-5 times the energy (and therefore cost) for each answer, for a marginal improvement in functionality.

                The idea that “they’ve come on in leaps and bounds over the last 3 years, therefore they will continue to improve at that rate” isn’t really supported by the evidence.

                • mindbleach@sh.itjust.works · 14 days ago

                  We don’t need leaps and bounds, from here. We’re already in science fiction territory. Incremental improvement has silenced a wide variety of naysaying.

                  And this is with LLMs - which are stupid. We didn’t design them with logic units or factoid databases. Anything they get right is an emergent property from guessing plausible words, and they get a shocking amount of things right. Smaller models and faster training will encourage experimentation for better fundamental goals. Like a model that can only say yes, no, or mu. A decade ago that would have been an impossible sell - but now we know data alone can produce a network that’ll fake its way through explaining why the answer is yes or no. If we’re only interested in the accuracy of that answer, then we’re wasting effort on the quality of the faking.

                  Even with this level of intelligence, where people still bicker about whether it is any level of intelligence, dumb tricks keep working. Like telling the model to think out loud. Or having it check its work. These are solutions an author would propose as comedy. And yet: it helps. It narrows the gap between “but right now it sucks at [blank]” and having to find a new [blank]. If that never lets it do math properly, well, buy a calculator.

          • Nalivai@lemmy.world · 15 days ago

            Everything is possible in theory. That doesn’t mean everything has happened, or is just about to happen.

  • Charlxmagne@lemmy.world · 16 days ago

    This is what happens when you don’t know what your own code does: you lose the ability to manage it. That is precisely why AI won’t take programmers’ jobs.

  • formulaBonk@lemm.ee · 16 days ago

    Reminds me of the days before AI assistants, when people copy-pasted code from forums and then you’d get questions like “I found this code and I know what every line does except this ‘for( int i = 0; i < 10; i ++)’ part. Is this someone using an unsupported expression?”

      • Moredekai@lemmy.world · 16 days ago

        It’s a standard formatted for-loop. It’s creating the integer variable i and setting it to zero. The second part is saying “do this while i is less than 10”, and the last part is saying what to do after the loop runs once: increment i by 1. Under this would be the actual stuff you want to be doing in that loop. Assuming nothing in the rest of the code is manipulating i, it’ll do this 10 times and then move on.
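
        For anyone who wants to see it as one runnable piece, here’s a minimal sketch in C (the printf body is just a stand-in for whatever the loop would actually do):

        #include <stdio.h>

        int main(void) {
            /* i starts at 0; the body runs while i < 10; i++ bumps i after each pass */
            for (int i = 0; i < 10; i++) {
                printf("iteration %d\n", i); /* prints a line for 0 through 9, ten in total */
            }
            return 0;
        }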

      • jqubed@lemmy.world · 16 days ago

        @Moredekai@lemmy.world posted a detailed explanation of what it’s doing, but just to chime in that it’s an extremely basic part of programming. Probably a first week of class if not first day of class thing that would be taught. I haven’t done anything that could be considered programming since 2002 and took my first class as an elective in high school in 2000 but still recognize it.

      • JustAnotherKay@lemmy.world · 16 days ago

        for( int i = 0; i < 10; i ++)

        This reads as “assign an integer to the variable i and put a 0 in that spot. Do the following code, and once completed add 1 to i. Repeat until i reaches 10.”

        int i = 0 initializes i, tells the compiler it’s an integer (whole number), and assigns 0 to it all at once.

        i++ can be written a few ways, but they all say “add 1 to i” (a few of the spellings are sketched below).

        i < 10 tells it to keep going only while i is less than 10, so it stops once i reaches 10.

        for tells it to loop, and starts a block, which is what will actually be looping.
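
        A quick sketch of the common spellings of that increment, all equivalent here:

        i++;       /* post-increment */
        ++i;       /* pre-increment; same effect when the resulting value isn't used */
        i += 1;
        i = i + 1; /* every one of these leaves i one larger than before */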

        Edits: A couple of clarifications

    • barsoap@lemm.ee · 16 days ago

      i <= 9, you heathen. Next thing you’ll do is i < INT_MAX + 1 and then the shit’s steaming.

      I’m cooked, see thread.

        • barsoap@lemm.ee · 16 days ago

          I mean, i < 10 isn’t wrong as such; it’s just good practice to always use <=, because in the INT_MAX case you have to, and everything should be regular, per the principle of least astonishment: that 10 might become a #define FOO 10, which then might become #define FOO INT_MAX. Each of those changes looks valid in isolation, but if there’s only a single i < FOO in your codebase you’ve introduced a bug by spooky action at a distance (overflow on int is undefined behaviour in C, in case anyone is wondering what the bug is).

          …never believe anyone who says “C is a simple language”. Their code is shoddy and full of bugs and they should be forced to write Rust for their own good.

          • kevincox@lemmy.ml · 16 days ago

            But your case is wrong anyway, because i <= INT_MAX will always be true, by definition. By your argument, < is actually better, because it is consistent all the way from < 0 to iterate 0 times up to < INT_MAX to iterate the maximum number of times. INT_MAX + 1 is the problem, not <, which is the standard way to write for loops, and the standard for a reason.

            • barsoap@lemm.ee · 16 days ago

              You’re right, that’s what I get for not having written a line of C in, what, 15 years. Bonus challenge: write for i in i32::MIN..=i32::MAX in C, that is, iterate over the whole range, start and end inclusive.

              (I guess the ..= might be where my confusion came from because Rust’s .. is end-exclusive and thus like <, but also not what you want because i32::MAX + 1 panics).
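
              For anyone curious, a minimal sketch of that inclusive full-range loop in C: test for INT_MAX before incrementing, so the ++ can never overflow (it does take a while to actually run):

              #include <limits.h>
              #include <stdio.h>

              int main(void) {
                  long long count = 0; /* only here to show the loop really covers the range */
                  int i = INT_MIN;
                  for (;;) {
                      count++; /* "use" i here */
                      if (i == INT_MAX)
                          break; /* leave before i++ could overflow */
                      i++;
                  }
                  printf("%lld iterations\n", count); /* 4294967296 if int is 32-bit */
                  return 0;
              }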

  • Takumidesh@lemmy.world · 16 days ago

    This is satire / trolling for sure.

    LLMs aren’t really at the point where they can spit out an entire program, including handling deployment, environments, etc. without human intervention.

    If this person is ‘not technical’ they wouldn’t have been able to successfully deploy and interconnect all of the pieces needed.

    The AI may have been able to spit out snippets, and those snippets may be very useful, but where it stands, it’s just not going to be able to, with no human supervision/overrides, write the software, stand up the DB, and deploy all of the services needed. With human guidance, sure, but without someone holding the AI’s hand it just won’t happen (remember, this person is ‘not technical’).

    • Idk, I’ve seen some crazy complicated stuff woven together by people who can’t code. I’ve got a friend who has no job and is trying to make a living off coding while, for 15+ years, being totally unable to learn coding. Some of the things they make are surprisingly complex. Though also (and the person mentioned here may do similarly) they don’t ONLY use AI. They use GitHub a lot too. They make nearly nothing themselves, but go through GitHub and basically combine large chunks of code others have made with AI-generated code. Somehow they do it well enough to have done things with servers, cryptocurrency, etc., all the while not knowing any coding language.

    • MyNameIsIgglePiggle@sh.itjust.works · 16 days ago

      Claude Code can make something that works, but it’s kinda over-engineered and really struggles to make an elegant solution that maximises code reuse; it’s the opposite of DRY.

      I’m doing a personal project at the moment and used it for a few days, made good progress but it got to the point where it was just a spaghetti mess of jumbled code, and I deleted it and went back to implementing each component one at a time and then wiring them together manually.

      My current workflow is basically never let them work on more than one file at a time, and build the app one component at a time, starting at the ground level and then working in, so for example:

      Create base classes that things will extend, then create an example data model class and iterate on that architecture A LOT until it’s really elegant.

      Then I’ve been getting it to write me a generator, not the actual code for the models.

      Then (level 3) we start with the UI layer, so now we make a UI kit the app will use and reuse for different components.

      Then we make a UI component that will be used in a screen. I’m using Flutter as an example, so it would be a stateless component.

      We now write tests for the component

      Now we do a screen, and I import each of the components.

      It’s still very manual, but it’s getting better. You are still going to need a human coder, I think forever, but there are two big problems that aren’t being addressed, because people are just putting their head in the sand and saying “nah, can’t do it”, or the clown OP in the post who thinks they can do it.

      1. Because dogs be clownin’, the public perception of programming as a career will be devalued: “I’ll just make it myself!” Or, like my rich engineer uncle said to me when I was doing websites professionally, “a 13-year-old can just make a website, why would I pay you so much to do it?” THAT FUCKING SUCKS. But a similar attitude has existed from people before: “I’ll just hire Indians.” This is bullshit, but perception is important, and it’s going to require you to justify yourself for a lot more work.

      2. And this is the flip side, the good news: the skills you have developed are going to be SO MUCH FUCKING HARDER TO LEARN. When you can just say “hey, generate me an app that manages customers and follow-ups” and something gets spat out, you aren’t going to investigate the grind required to work out basic shit. People will simply not get to the same level they are now.

      That logic about how to scaffold and architect an app in a sensible way - USING AI TOOLS - is actually the new skillset. You need to know how to build the app, and then how to efficiently and effectively use the new tools to actually construct it. Then you need to be able to do code review for each change.

      </rant>

      • hubobes@sh.itjust.works · 16 days ago

        How? We’ve been trying to adopt AI for dev work for years now, and every time the next-gen tool or model gets released it fails spectacularly at basic things. And that’s just the technical stuff; I still have no idea how to tell it to implement our use cases, as it simply does not understand the domain.

        It is great at building things others have already built and it could train on, but we don’t really have a use case for that.

      • Takumidesh@lemmy.world · 16 days ago

        I’m skeptical. You are saying that your team has no hand in the provisioning and you deputized an AI with AWS keys and just let it run wild?

    • Tja@programming.dev · 16 days ago

      Might be satire, but I think some “products based on LLMs” (not LLMs alone) would be able to. There are pretty impressive demos out there, but honestly I haven’t tried them myself.

    • qaz@lemmy.world · 16 days ago

      It’s further than you think. I spoke to someone today about it, and he told me it produced a basic SaaS app for him. He said that it looked surprisingly okay and the basic functionalities actually worked too. He did note that it kept using deprecated code, consistently made a few basic mistakes despite being told how to avoid them, and failed to produce nontrivial functionalities.

      He did say that it used very common libraries and we hypothesized that it functioned well because a lot of relevant code could be found on GitHub and that it might function significantly worse when encountering less popular frameworks.

      Still, it’s quite impressive, although not surprising, considering it was only a matter of time before people started feeding the feedback of an IDE back into it.

    • iAvicenna@lemmy.world · 16 days ago

      My impression is that with some guidance it can put together a basic skeleton of complex stuff too. But you need a specialist level of knowledge to fix the mistakes that fail at compile time, or, worse yet, the mistakes that compile but don’t at all achieve the intended result. To me it has been most useful at getting the correct arguments for argument-heavy libraries like plotly, remembering how to do stuff in bash, or learning something from scratch like 3js. As soon as you try to do something more complex than it can handle, it confidently starts cycling through the same couple of mistakes over and over. The keywords it spews in those mistakes can sometimes be helpful to direct your search online, though.

      So it has the potential to be helpful to a programmer, but it can’t yet replace programmers as tech bros like to fantasize about.

  • Phoenicianpirate@lemm.ee · 16 days ago

    I took a web dev boot camp. If I were to use AI I would use it as a tool and not the motherfucking builder! AI gets even basic math equations wrong!

  • mindbleach@sh.itjust.works · 16 days ago

    An otherwise meh article concluded with “It is in everyone’s interest to gradually adjust to the notion that technology can now perform tasks once thought to require years of specialized education and experience.”

    Much as we want to point and laugh - this is not some loon’s fantasy. This is happening. Some dingus told spicy autocomplete ‘make me a database!’ and it did. It’s surely as exploit-hardened as a wet paper towel, but it functions. Largely as a demonstration of Kernighan’s law.

    This tech is borderline miraculous, even if it’s primarily celebrated by the dumbest motherfuckers alive. The generation and the debugging will inevitably improve to where the machine is only as bad at this as we are. We will be left with the hard problem of deciding what the software is supposed to do.

    • HiddenLayer555@lemmy.ml · 16 days ago

      It is in everyone’s interest to gradually adjust to the notion that technology can now perform tasks once thought to require years of specialized education and experience.

      The years of specialized education and experience are not for writing code in and of itself. Anyone with an internet connection can learn to do that in not that long. What takes years to perfect is writing reliable, optimized, secure code, communicating and working efficiently with others, writing code that can be maintained by others long after you leave, knowing the theories behind why code written in a certain way works better than code written in some other way, and knowing the qualitative and quantitative measures to even be able to assess whether one piece of code is “better” than another. Source: Self-learned programming, started building stuff on my own, and then went through an actual computer science program. You miss so much nuance and underlying theory when you self-learn, which directly translates into bad code that’s a nightmare to maintain.

      Finally, the most important thing is that with a person who has years of specialized education and experience, you can actually have a conversation about their code, ask them to explain in detail how it works and the process they used to write it. Then you can ask them follow-up questions and request further clarification. Trying to get AI to explain itself is a complete shitshow, and while humans do have a propensity to make shit up to cover their own/their coworkers’ asses, AI does that even when it makes no sense not to tell the truth, because it doesn’t really know what “the truth” is or why other people would want it.

      Will AI eventually catch up? Almost certainly, but we’re nowhere close to that right now. Currently it’s less like an actual professional developer and more like someone who knows just enough to copy paste snippets from Stack Overflow and hack them together into a program that manages to compile.

      I think the biggest takeaway with AI programming is not that it can suddenly do just as well as someone with years of specialized education and experience, but that we’re going to get a lot more shitty software that looks professional on the surface but is a dumpster fire inside.

      • mindbleach@sh.itjust.works · 16 days ago

        Self-learned programming, started building stuff on my own, and then went through an actual computer science program.

        Same. Starting with QBASIC, no less, which is an excellent source of terrible practices. At one point I created a code snippet that would perform a division and multiplication to find the remainder, because I’d never heard of modulo. Or functions.
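
        Roughly what that hack looks like, translated into C rather than QBASIC, next to the modulo operator it was unknowingly reinventing:

        #include <stdio.h>

        int main(void) {
            int a = 17, b = 5;
            int hack = a - (a / b) * b; /* integer division truncates: 17 - 3*5 = 2 */
            int mod  = a % b;           /* the modulo operator gives the same result: 2 */
            printf("%d %d\n", hack, mod);
            return 0;
        }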

        Right now, this lets people skip the hair-pulling syntax errors, and tell the computer what they think the program should be doing, in plain English. It’s not even “compileable pseudocode.” It’s high-level logic, nearly to the point that logic errors are all that can remain. It desperately needs some non-answer feedback states for if you tell it to “implement MP4 encoding” and expect that to Just Work.

        But it’s teaching people to write the comments first.

        we’re nowhere close to that right now.

        The distance from here to “oh shit” is shorter than we’d prefer. This tech works like a joke. “Chain of thought” apparently means telling the robot to act smarter… and it does. Which is almost less silly than Stable Diffusion removing every part of the marble that doesn’t look like Hatsune Miku. If it’s stupid, but it works… it’s still stupid. But it works.

        Someone’s gonna prompt “Write like Donald Knuth” and the robot’s gonna go, “Oh, you wanted good code? Why didn’t you say so.”

  • Dojan@lemmy.world · 15 days ago

    Was listening to my go-to podcast during morning walkies with my dog. They brought up an example where some couple was using ShatGPT as a couples therapist, and what a great idea that was. Talking about how one of the podcasters has more of a friend-like relationship with “their” GPT.

    I usually find this podcast quite entertaining, but this just got me depressed.

    ChatGPT is by the same company that stole Scarlett Johansson’s voice. The same vein of companies that thinks it’s perfectly okay to pirate 81 terabytes of books, despite definitely being able to afford paying the authors. I don’t see a reality where it’s ethical or indicative of good judgement to trust a product from any of these companies with information.

    • Bazoogle@lemmy.world · 15 days ago

      I agree with you, but I do wish a lot of conservatives used ChatGPT or other AIs more. It, at the very least, will tell them that all the batshit stuff they believe is wrong and clear up a lot of the blatant misinformation. With time, will more batshit AIs be released to reinforce their current ideas? Yeah. But ChatGPT is trained on enough (granted, stolen) data that it isn’t prone to retelling the conspiracy theories. Sure, it will lie to you and make shit up when you get into niche technical subjects, or when you ask it to do basic counting, but it certainly wouldn’t say Ukraine started the war.

      • ZMoney@lemmy.world · 15 days ago

        It will even agree that AIs shouldn’t be controlled by oligarchic tech monopolies and should instead be distributed freely and fairly for the public good, but that the international system of nation states competing against each other militarily and economically prevents this. But maybe it would agree with the opposite of that too; I didn’t try asking.

  • bitjunkie@lemmy.world · 15 days ago

    AI can be incredibly useful, but you still need someone with the expertise to verify its output.