I work for an adtech company, and I'm pretty much the only developer for the JavaScript library that runs on client sites and shows our ads. I don't use AI at all because it keeps generating crap.
"Explain this to me, AI." *Reads back exactly what's on the screen, including the comments, somehow with more words but less information.* Ok…
Ok, this is tricky. AI, can you do this refactoring so I don't have to keep track of everything? No… That's all wrong… Yeah, I know it's complicated, that's why I wanted it refactored. No, you can't do that… Fuck, now I can either toss all your changes and do it myself, or spend the next 3 hours rewriting it.
Yeah, I struggle to see how anyone finds this garbage useful.
This was the case a year or two ago, but now, if you have an MCP server for your docs and have your project and goals outlined properly, it's pretty good.
You shouldn’t think of “AI” as intelligent and ask it to do something tricky. The boring stuff that’s mostly just typing, that’s what you get the LLMs to do. “Make a DTO for this table <paste>” “Interface for this JSON <paste>”
I just have a bunch of conversations going that I can paste stuff into, and it generates basic code. Then it's just connecting things up, but that's the fun part anyway.
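To make that concrete, here's roughly the shape of what I expect back from the "Interface for this JSON" prompt (the JSON and every field name here are made up for the example, not anything real):

```typescript
// Made-up example: given pasted JSON like
//   { "orderId": "A123", "totalCents": 1999, "items": [{ "sku": "X", "qty": 2 }] }
// the LLM should hand back something like this, which I just eyeball and tweak.
interface OrderItem {
  sku: string;
  qty: number;
}

interface Order {
  orderId: string;
  totalCents: number;
  items: OrderItem[];
}
```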
Most IDEs have done the boring stuff with templates and code generation for like a decade, so that's not so helpful to me either, but if it works for you.
Yeah, but the code generation tools I've used in the past take a significant amount of configuration, often generate a bunch of code I don't want, and don't do it the way I want. Many times it's more trouble than it's worth. Having an LLM do it means I don't have to configure anything, and since it only generates the specific thing I'm working on at that moment, I can quickly validate that it did things right and make any additions I want. It's also the same tool across the various languages I use, which adds more convenience.
Yeah, if you have your IDE set up with tools that analyze the data source and do what you want, that may work better for you. But with the number of DBs I deal with, I'd spend more time setting up code generation than actually writing code.
I've used AI to ask questions, have conversations for company, and generate images for role-playing.
I've been happy with it so far.
That’s kind of outside the software development discussion but glad you’re enjoying it.
As a developer
- I can jot down a bunch of notes and have AI turn them into a reasonable presentation, documentation, or proposal
- Zoom has an AI agent that's pretty good at summarizing a meeting. It usually needs only minor corrections, and you can send it out much faster than taking notes yourself
- For coding I mostly use AI like autocomplete. Sometimes it's able to autocomplete entire code blocks
- For something new I might have AI generate a class or something, and use it as a first draft that I then make work
I’ve had success with:
- dumping email threads into it to generate user stories
- generating requirements documentation templates, so that everyone has to fill out the exact details needed to make the project a success
- generating quick one-off scripts
- suggesting a consistent way to refactor a block of code (I'm not crazy enough to let it actually do all the refactoring)
- summarizing the work done for a slide deck and generating appropriate infographics
Essentially, all the stuff that I'd need to review anyway, but using AI means that actually generating the content happens in a consistent manner I don't have to think about. I don't let it create anything, just transform things in blocks that I can quickly review for correctness and appropriateness. Kind of like getting a junior programmer to do something for me.
Experienced software developer, here. “AI” is useful to me in some contexts. Specifically when I want to scaffold out a completely new application (so I’m not worried about clobbering existing code) and I don’t want to do it by hand, it saves me time.
And… that’s about it. It sucks at code review, and will break shit in your repo if you let it.
Same. I also like it for basic research and for help with the syntax of obscure SQL queries, but coding hasn't worked very well. One of my less technical coworkers tried to vibe code something and it didn't work well. Maybe it would do okay on something routine, but generally speaking it would probably be better to use a library for that anyway.
I actively hate the term “vibe coding.” The fact is, while using an LLM for certain tasks is helpful, trying to build out an entire, production-ready application just by prompts is a huge waste of time and is guaranteed to produce garbage code.
At some point, people like your coworker are going to have to look at the code and work on it, and if they don’t know what they’re doing, they’ll fail.
I commend them for giving it a shot, but I also commend them for recognizing it wasn’t working.
Not a developer per se (mostly virtualization, architecture, and hardware), but AI can get me to 80-90% of a script in no time. The remaining 10-20% takes a while, but it was going to take a while regardless, so the time savings on that first 80-90% is awesome. It does send me down a really bad path at times, though. Being experienced enough to recognize that is very helpful, in that I can just start over.
In my opinion, AI shouldn't replace coders, but it can definitely enhance them if used properly. It's a tool like anything else. I can put a screw in with a hammer, but I probably shouldn't.
Like I said, I do find it useful at times. But not only shouldn't it replace coders, it fundamentally can't. At least, not without a fundamental rearchitecting of how these models work.
The reason it goes down a “really bad path” is that it’s basically glorified autocomplete. It doesn’t know anything.
On top of that, spoken and written language are very imprecise, and there’s no way for an LLM to derive what you really wanted from context clues such as your tone of voice.
Take the phrase “fruit flies like a banana.” Am I saying that a piece of fruit might fly in a manner akin to how another piece of fruit, a banana, flies if thrown? Or am I saying that the insect called the fruit fly might like to consume a banana?
It’s a humorous line, but my point is serious: We unintentionally speak in ambiguous ways like that all the time. And while we’ve got brains that can interpret unspoken signals to parse intended meaning from a word or phrase, LLMs don’t.
Everyone on Lemmy is a software developer.
Sometimes I get an LLM to review a patch series before I send it, as a quick once-over. I'd estimate about 50% of the suggestions are useful and about 10% are based on "misunderstanding." Last week it suggested a spelling fix I'd already made, because it didn't understand that the - in the diff meant I'd already changed the line.
Exactly what you would expect from a junior engineer.
Let them run unsupervised and you have a mess to clean up. Guide them with context and you’ve got a second set of capable hands.
Something something craftsmen don’t blame their tools
AI tools are way less useful than a junior engineer, and they aren’t an investment that turns into a senior engineer either.
Yeah but a Claude/Cursor/whatever subscription costs $20/month and a junior engineer costs real money. Are the tools 400 times less useful than a junior engineer? I’m not so sure…
The point is that comparing AI tools to junior engineers is ridiculous in the first place. It is simply marketing.
Even at $100/month you're comparing against a junior who costs more than $10k/month. That's 1% of the cost for surely more than 1% of the functionality of a junior.
You can see why companies are tripping over themselves to push this new modality.
I was just ballparking the salary. Say it’s only 100x. Does the argument change? It’s a lot more money to pay for a real person.
Wasn’t it clear that our comments are in agreement?
It wasn’t, but now it is.
Is “way less useful” something you can cite with a source, or is that just feelings?
It's based on my experience, which I trust immeasurably more than rigged "studies" done by the big LLM companies with a clear conflict of interest.
Okay, but like-
You could just be lying.
You could even be a chatbot, programmed to hype AI in comments sections.
So I’m going to trust studies, not some anonymous commenter on the internet who says “trust me bro!”
Huh? I'm definitely not hyping AI. If anything, the opposite. We're also literally in the comment section for a study about AI productivity, which is the first remotely reputable study I've ever seen; the rest have been rigged marketing stunts. As for weighing my opinion on AI productivity versus junior developers against studies: why don't you bring me one that isn't "we made an artificial test, then directly trained our LLM on the questions so it would look good for investors"? I'll wait.
Understood, thanks for being honest
The difference being that junior engineers eventually grow into senior engineers.
Does every junior eventually become a senior?
No, but that’s the only way you get senior engineers!
I agree, but the goal of CEOs is “line go up,” not make our eng team stronger (usually)
Capitalism, shortsighted? Say it ain’t so!
> Exactly what you would expect from a junior engineer.
Except junior engineers become seniors. If you don't understand this… are you HR?
They might become seniors, for 99% more investment. Or they crash out as "not a great fit," which happens too. Juniors aren't just "senior seeds" to be planted.
No shit. AI will hallucinate shit, I'll hit tab by accident and spend time undoing that, or it'll hijack tab on new lines inconsistently.
Funny how the article concludes that AI tools are still good anyway, actually.
This AI hype is a sickness
Writing code is the easiest part of my job. Why are you taking that away?
For some of us that's more useful. I'm currently playing a DevSecOps role, and one of its defining characteristics is that I need to know all the tools. On Friday I was writing some Java modules, then some Groovy glue, then spent the afternoon writing a Python utility. While I'm reasonably good at jumping among languages and tools, those context switches are expensive. I definitely want AI help with that.
That being said, AI is just a step up from search or autocomplete; it's not magical. I've had the most luck with it generating unit tests, since those tend to be simple and repetitive (and they're also a major place for juniors to screw up). AI doesn't know whether the slop it's pumping out is useful, so you do need to guide it, understand it, and really cull the dreck.
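For example, this is the sort of simple, repetitive test I'll let it churn out and then review (a sketch in Jest; `parseDuration` is a hypothetical helper, not real code from my project):

```typescript
// Sketch of the repetitive tests I let the LLM generate, then review.
// Jest syntax; parseDuration is a hypothetical helper returning milliseconds.
import { parseDuration } from "./parseDuration";

describe("parseDuration", () => {
  it("parses seconds", () => {
    expect(parseDuration("30s")).toBe(30_000);
  });

  it("parses minutes", () => {
    expect(parseDuration("2m")).toBe(120_000);
  });

  it("throws on garbage input", () => {
    expect(() => parseDuration("banana")).toThrow();
  });
});
```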
I've used Cursor quite a bit recently, in large part because it's an organization-wide push at my employer, so I've taken the opportunity to experiment.
My best analogy is that it's like micromanaging a hyper-productive junior developer that somehow already "knows" how to do stuff in most languages and frameworks, but completely lacks common sense, a concept of good practices, and a big-picture view of what's being accomplished. Which means a ton of course correction. I even had it spit out code attempting to hardcode credentials.
I can accomplish some things "faster" with it, but mostly in comparison to my professional reality: I rarely have the contiguous chunks of time I'd need to properly ramp up on something entirely new to me. I save a significant amount of the onboarding but lose a bunch of time navigating to a reasonable solution. Critically, that navigation is more interrupt-tolerant, and I get a lot of interrupts.
That said, this year’s crop of interns at work seem to be thin wrappers on top of LLMs and I worry about the future of critical thinking for society at large.
> That said, this year's crop of interns at work seem to be thin wrappers on top of LLMs and I worry about the future of critical thinking for society at large.
This is the most frustrating problem I have. With a few exceptions, LLM use seems to be inversely proportional to skill level, and having someone tell me "chatgpt said ___" while asking me for help, because ChatGPT clearly isn't solving their problem, makes me want to just hang up.
Just the other day I wasted 3 min trying to get AI to sort 8 lines alphabetically.
I had to sort over 100 lines of data hardcoded into source (don’t ask) and it was a quick function in my IDE.
I feel like “sort” is common enough everywhere that AI should quickly identify the right Google results, and it shouldn’t take 3 min
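For reference, the kind of "quick function" I mean is nothing fancier than this (a sketch; the data here is made up):

```typescript
// Minimal sketch: alphabetize hardcoded lines.
// The array stands in for the 100+ real entries.
const lines: string[] = ["pear", "apple", "banana"];
const sorted = [...lines].sort((a, b) => a.localeCompare(b));
console.log(sorted.join("\n"));
```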
By having it write a quick function to do so, or by having it sort them alphabetically within the chat? Because I've used GPT to write boilerplate and/or basic functions for random tasks like this numerous times without issue. But expecting it to sort a block of text for you is not what LLMs are really built for.
That being said, I agree that expecting AI to write complex and/or long-form code is a fool’s hope. It’s good for basic tasks to save time and that’s about it.
The tool I use can rewrite code given basic commands. Other times I might say, “Write a comment above each line” or “Propose better names for these variables” and it does a decent job.
I've actually had a fair bit of success getting GitHub Copilot to do things like this. Heck, I even got it to do some matrix transformations of vectors in a JSON file.
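Roughly what that came out looking like, reconstructed as a sketch (the file name and layout are made up, with a 90-degree rotation standing in for the actual matrix):

```typescript
// Sketch: apply a 2x2 matrix to every [x, y] vector in a JSON file.
// File name and layout are hypothetical.
import { readFileSync, writeFileSync } from "fs";

type Vec2 = [number, number];
type Mat2 = [[number, number], [number, number]];

function apply(m: Mat2, [x, y]: Vec2): Vec2 {
  return [m[0][0] * x + m[0][1] * y, m[1][0] * x + m[1][1] * y];
}

const rotate90: Mat2 = [[0, -1], [1, 0]]; // 90° counterclockwise

const vectors: Vec2[] = JSON.parse(readFileSync("vectors.json", "utf8"));
const transformed = vectors.map((v) => apply(rotate90, v));
writeFileSync("vectors.json", JSON.stringify(transformed, null, 2));
```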
I wouldn't mention this to anyone at work. It makes you sound clueless.
My boss insists I use it and I insist on telling him when it can’t do the simplest things.
It sounds like you’ve got it all figured out. Best of luck to you