Experienced software developer, here. “AI” is useful to me in some contexts. Specifically when I want to scaffold out a completely new application (so I’m not worried about clobbering existing code) and I don’t want to do it by hand, it saves me time.
And… that’s about it. It sucks at code review, and will break shit in your repo if you let it.
Same. I also like it for basic research and for help with the syntax of obscure SQL queries, but actual coding hasn’t worked very well. One of my less technical coworkers tried to vibe code something and it didn’t go well. Maybe it would do okay on something routine, but for anything that routine you’d generally be better off using a library anyway.
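For a concrete (completely made-up) example of the kind of obscure syntax I mean: I can never remember how a recursive CTE is spelled, and the LLM gets the skeleton right instantly. Something roughly like this, with hypothetical table and column names:

```sql
-- Hypothetical example: walk a manager/report hierarchy with a recursive CTE.
-- The employees table and its columns are made up for illustration.
WITH RECURSIVE reports AS (
    SELECT id, manager_id, name, 1 AS depth
    FROM employees
    WHERE manager_id IS NULL          -- start from the top of the org chart
    UNION ALL
    SELECT e.id, e.manager_id, e.name, r.depth + 1
    FROM employees e
    JOIN reports r ON e.manager_id = r.id   -- follow the chain downward
)
SELECT name, depth FROM reports ORDER BY depth, name;
```

It still needs checking against the real schema, but it saves me the trip through the docs.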
I actively hate the term “vibe coding.” The fact is, while using an LLM for certain tasks is helpful, trying to build out an entire, production-ready application just by prompts is a huge waste of time and is guaranteed to produce garbage code.
At some point, people like your coworker are going to have to look at the code and work on it, and if they don’t know what they’re doing, they’ll fail.
I commend them for giving it a shot, but I also commend them for recognizing it wasn’t working.
Not a developer per se (mostly virtualization, architecture, and hardware), but AI can get me to 80-90% of a script in no time. The last 10-20% takes a while, but that part was going to take a while regardless, so the time savings on that first chunk is awesome. It does send me down a really bad path at times, though. Being experienced enough to recognize that is very helpful, in that I just start over.
In my opinion AI shouldn’t replace coders, but it can definitely enhance them if used properly. It’s a tool like any other. I can put a screw in with a hammer, but I probably shouldn’t.
Like I said, I do find it useful at times. But not only shouldn’t it replace coders, it fundamentally can’t. At least, not without a fundamental rearchitecting of how these models work.
The reason it goes down a “really bad path” is that it’s basically glorified autocomplete. It doesn’t know anything.
On top of that, spoken and written language are very imprecise, and there’s no way for an LLM to derive what you really wanted from context clues such as your tone of voice.
Take the phrase “fruit flies like a banana.” Am I saying that a piece of fruit might fly in a manner akin to how another piece of fruit, a banana, flies if thrown? Or am I saying that the insect called the fruit fly might like to consume a banana?
It’s a humorous line, but my point is serious: We unintentionally speak in ambiguous ways like that all the time. And while we’ve got brains that can interpret unspoken signals to parse intended meaning from a word or phrase, LLMs don’t.
Everyone on Lemmy is a software developer.
Sometimes I get an LLM to review a patch series as a quick once-over before I send it. I’d estimate about 50% of the suggestions are useful and about 10% are based on a “misunderstanding.” Last week it suggested a spelling fix I’d already made, because it didn’t understand that the - in the diff meant I’d already changed that line.
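A made-up minimal version of what happened (not the actual patch): the hunk already contains the fix, but the model keys off the - line, sees the typo, and tells me to correct it again.

```diff
--- a/README.txt
+++ b/README.txt
@@ -1,3 +1,3 @@
 Overview
-This modul handles the scheduling logic.
+This module handles the scheduling logic.
 See the docs for details.
```

The - line is the old text being removed and the + line is its replacement, but read as plain prose, the “modul” typo is still sitting right there for it to flag.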
Exactly what you would expect from a junior engineer.
Let them run unsupervised and you have a mess to clean up. Guide them with context and you’ve got a second set of capable hands.
Something something craftsmen don’t blame their tools
AI tools are way less useful than a junior engineer, and they aren’t an investment that turns into a senior engineer either.
Is “way less useful” something you can cite with a source, or is that just feelings?
It is based on my experience, which I trust immeasurably more than rigged “studies” done by the big LLM companies with clear conflict of interest.
Okay, but like-
You could just be lying.
You could even be a chatbot, programmed to hype AI in comment sections.
So I’m going to trust studies, not some anonymous commenter on the internet who says “trust me bro!”
Huh? I’m definitely not hyping AI. If anything, the opposite. We’re also literally in the comment section for a study about AI productivity, which is the first remotely reputable study I’ve ever seen; the rest have been rigged marketing stunts. As for weighing my opinion on AI productivity versus junior developers against studies, why don’t you bring me one that isn’t “we made an artificial test and then trained our LLM directly on the questions so it looks good for investors”? I’ll wait.
Understood, thanks for being honest
Yeah but a Claude/Cursor/whatever subscription costs $20/month and a junior engineer costs real money. Are the tools 400 times less useful than a junior engineer? I’m not so sure…
The point is that comparing AI tools to junior engineers is ridiculous in the first place. It is simply marketing.
Even at $100/month you’re comparing it to a junior who costs more than $10k/month. That’s 1% of the cost for certainly more than 1% of a junior’s functionality.
You can see why companies are tripping over themselves to push this new modality.
I was just ballparking the salary. Say it’s only 100x. Does the argument change? It’s a lot more money to pay for a real person.
Wasn’t it clear that our comments are in agreement?
It wasn’t, but now it is.
❤️
The difference being junior engineers eventually grow up into senior engineers.
Does every junior eventually achieve becoming a senior?
No, but that’s the only way you get senior engineers!
I agree, but the goal of CEOs is “line go up,” not make our eng team stronger (usually)
Capitalism, shortsighted? Say it ain’t so!
Except junior engineers become seniors. If you don’t understand this … are you HR?
They might become seniors, but only after the other 99% of the investment. Or they crash out as “not a great fit,” which happens too. Juniors aren’t just “senior seeds” to be planted.