A guy is driving around the back woods of Montana and he sees a sign in front of a broken down shanty-style house: ‘Talking Dog For Sale.’
He rings the bell and the owner appears and tells him the dog is in the backyard.
The guy goes into the backyard and sees a nice looking Labrador Retriever sitting there.
“You talk?” he asks.
“Yep,” the Lab replies.
After the guy recovers from the shock of hearing a dog talk, he says, “So, what’s your story?”
The Lab looks up and says, “Well, I discovered that I could talk when I was pretty young. I wanted to help the government, so I told the CIA. In no time at all they had me jetting from country to country, sitting in rooms with spies and world leaders, because no one figured a dog would be eavesdropping. I was one of their most valuable spies for eight years running… but the jetting around really tired me out, and I knew I wasn’t getting any younger, so I decided to settle down. I signed up for a job at the airport to do some undercover security, wandering near suspicious characters and listening in. I uncovered some incredible dealings and was awarded a batch of medals. I got married, had a mess of puppies, and now I’m just retired.”
The guy is amazed. He goes back in and asks the owner what he wants for the dog.
“Ten dollars,” the owner says.
“Ten dollars? This dog is amazing! Why on Earth are you selling him so cheap?”
“Because he’s a liar. He’s never been out of the yard.”
There is an alternative reality out there where LLMs were never marketed as AI and were instead marketed as random text generators.
In that world, tech savvy people would embrace this tech instead of having to constantly educate people that it is in fact not intelligence.
I’ve already had more than one conversation where people quote AI as if it were a source, like quoting Google as a source. When I showed them how it can sometimes lie and explained that it’s not a primary source for anything, I just get that blank stare like I have two heads.
I use AI like that, except I’m not using the same shit everyone else is on. I use a Dolphin fine-tuned model with tool use hooked up to an embedder and SearXNG. Every claim it makes is sourced.
This is a bad example… If I ask a friend “is strawberry spelled with one or two r’s,” they would think I’m asking about the last part of the word.
The question seems to be specifically made to trip up LLMs. I’ve never heard anyone ask how many of a certain letter are in a word. I’ve heard people ask how you spell a word, and whether it’s with one or two of a specific letter, though.
If you think of LLMs as something with actual intelligence you’re going to be very unimpressed… It’s just a model to predict the next word.
If you think of LLMs as something with actual intelligence you’re going to be very unimpressed… It’s just a model to predict the next word.
This is exactly the problem, though. They don’t have “intelligence” or any actual reasoning, yet they are constantly being used in situations that require reasoning.
Maybe if you focus on pro- or anti-AI sources, but if you talk to actual professionals or hobbyists solving actual problems, you’ll see very different applications. If you go into it looking for problems, you’ll find them, likewise if you go into it for use cases, you’ll find them.
If you think of LLMs as something with actual intelligence you’re going to be very unimpressed
Artificial sugar is still sugar.
Artificial intelligence implies there is intelligence in some shape or form.
Artificial sugar is still sugar.
Because it contains sucrose, fructose or glucose? Because it metabolises the same and matches the glycemic index of sugar?
Because those are all wrong. What’s your criteria?
Because you’re using it wrong. It’s good for generative text and chains of thought, not symbolic calculations including math or linguistics
Because you’re using it wrong.
No, I think you mean to say it’s because you’re using it for the wrong use case.
Well this tool has been marketed as if it would handle such use cases.
I don’t think I’ve actually seen any AI marketing that was honest about what it can do.
I personally think image recognition is the best use case as it pretty much does what it promises.
Give me an example of how you use it.
Writing customer/company-wide emails is a good example. “Make this sound better: we’re aware of the outage at Site A, we are working as quick as possible to get things back online”
Dumbing down technical information: “word this so a non-technical person can understand: our DHCP scope filled up and there were no more addresses available for Site A, which caused the temporary outage for some users”
Another is feeding it an article and asking for a summary, https://hackingne.ws/ does that for its Bsky posts.
Coding is another good example, “write me a Python script that moves all files in /mydir to /newdir”
Asking for it to summarize a theory or protocol, “explain to me why RIP was replaced with RIPv2, and what problems people have had since with RIPv2”
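For what it’s worth, the file-moving prompt above usually comes back as something along these lines — a minimal sketch, with the function name and structure being my own illustration (the `/mydir` and `/newdir` paths are just the ones from the example prompt):

```python
import shutil
from pathlib import Path

def move_all_files(src: str, dst: str) -> None:
    """Move every regular file from src into dst, creating dst if needed."""
    src_path, dst_path = Path(src), Path(dst)
    dst_path.mkdir(parents=True, exist_ok=True)
    for item in src_path.iterdir():
        if item.is_file():  # skip subdirectories
            shutil.move(str(item), str(dst_path / item.name))

# Example usage from the prompt:
# move_all_files("/mydir", "/newdir")
```

Trivial, but that’s exactly the point of the example: it’s boilerplate you can verify at a glance.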
It’s not good for summaries. It often gets important bits wrong, like embedded instructions that can’t be summarized.
My experience has been very different, though I sometimes have to add to what it summarized. The Bsky account mentioned is a good example: most of the posts are very well summarized, but every now and then there will be one that isn’t as accurate.
Make this sound better: we’re aware of the outage at Site A, we are working as quick as possible to get things back online
How does this work in practice? I suspect you’re just going to get an email that takes longer for everyone to read, and doesn’t give any more information (or worse, gives incorrect information). Your prompt seems like what you should be sending in the email.
If the model (or context?) was good enough to actually add useful, accurate information, then maybe that would be different.
I think we’ll get to the point really quickly where a nice concise message like in your prompt will be appreciated more than the bloated, normalised version, which people will find insulting.
Yeah, normally my “Make this sound better” or “summarize this for me” is a longer wall of text that I want to simplify, I was trying to keep my examples short. Talking to non-technical people about a technical issue is not the easiest for me; AI has helped me dumb it down when sending an email, and helps correct my shitty grammar at times.
As for accuracy, you review what it gives you, you don’t just copy and send it without review. You’ll also have to tweak some pieces where it doesn’t make the most sense, such as wording you wouldn’t typically use. It is fairly accurate in my use cases, though.
Hallucinations are a thing, so validating what it spits out is definitely needed.
Another example: if you feel your email is too stern or gives the wrong tone, I’ve used it for that as well. “Make this sound more relaxed: well maybe if you didn’t turn off the fucking server we wouldn’t of had this outage!” (Just a silly example)
As for accuracy, you review what it gives you, you don’t just copy and send it without review.
Yeah, I don’t get why so many people seem to not get that.
It’s like people who were against Intellisense in IDEs because “What if it suggests the wrong function?”…you still need to know what the functions do. If you find something you’re unfamiliar with, you check the documentation. You don’t just blindly accept it as truth.
Just because it can’t replace a person’s job doesn’t mean it’s worthless as a tool.
Yeah, I don’t get why so many people seem to not get that.
The disconnect is that those people use their tools differently, they want to rely on the output, not use it as a starting point.
I’m one of those people, reviewing AI slop is much harder for me than just summarizing it myself.
I find function name suggestions useful because it’s a lookup tool. A summary tool is different: it doesn’t help me find a needle in a haystack, it just hands me a needle when I already have access to many needles. I want the good/best needle, and it can’t do that.
The issue is that AI is being invested in as if it can replace jobs. That’s not an issue for anyone who wants to use it as a spellchecker, but it is an issue for the economy, for society, and for the planet, because billions of dollars of computer hardware are being built and run on the assumption that trillions of dollars of payoff will be generated.
And correcting someone’s tone in an email is not, and will never be, a trillion dollar industry.
That’s a very different problem than the one in the OP
I think these are actually valid examples, albeit ones that come with a really big caveat; you’re using AI in place of a skill that you really should be learning for yourself. As an autistic IT person, I get the struggle of communicating with non-technical and neurotypical people, especially clients who you have to be extra careful with. But the reality is, you can’t always do all your communication by email. If you always rely on the AI to correct your tone or simplify your language, you’re choosing not to build an essential skill that is every bit as important to doing your job well as it is to know how to correctly configure an ACL on a Cisco managed switch.
That said, I can also see how relying on the AI at first can be a helpful learning tool as you build those skills. There’s certainly an argument that by using tools, but paying attention to the output of those tools, you build those skills for yourself. Learning by example works. I think used in that way, there’s potentially real value there.
Which is kind of the broader story with Gen AI overall. It’s not that it can never be useful; it’s that, at best, it can only ever aspire to “useful.” No one, yet, has demonstrated any ability to make AI “essential” and the idea that we should be investing hundreds of billions of dollars into a technology that is, on its best days, mildly useful, is sheer fucking lunacy.
Noted, I’ll be giving that a proper read after work. Thank you.
If you always rely on the AI to correct your tone or simplify your language, you’re choosing not to build an essential skill that is every bit as important to doing your job well as it is to know how to correctly configure an ACL on a Cisco managed switch.
This is such a good example of how AI/LLMs/whatever are being used as a crutch that is far more impactful than using a spellchecker. A spell checker catches typos or helps with unfamiliar words, but doesn’t replace the underlying skill of communicating to your audience.
It works well. For example, we had a work exercise where we had to write a press release based on an example, then write a Shark Tank pitch to promote the product we came up with in the release.
I gave AI the link to the example and a brief description of our product, and it spit out an almost perfect press release. I only had to tweak a few words because there were specific requirements I didn’t feed the AI.
Then I told it to take the press release and write the pitch based on it.
Again, very nearly perfect with only having to change the wording in one spot.
The dumbed down text is basically as long as the prompt. Plus you have to double-check it to make sure it didn’t write “outrage” instead of “outage”, just like if you wrote it yourself.
How do you know the answer on why RIP was replaced with RIPv2 is accurate and not just a load of bullshit like putting glue on pizza?
Are you really saving time?
Yes, I’m saving time. As I mentioned in my other comment:
Yeah, normally my “Make this sound better” or “summarize this for me” is a longer wall of text that I want to simplify, I was trying to keep my examples short.
And
and helps correct my shitty grammar at times.
And
Hallucinations are a thing, so validating what it spits out is definitely needed.
How do you validate the accuracy of what it spits out?
Why don’t you skip the AI and just use the thing you use to validate the AI output?
Most of what I’m asking it are things I have a general idea of, and AI has the capability of making short explanations of complex things. So typically it’s easy to spot a hallucination, but the pieces that I don’t already know are easy to Google to verify.
Basically I can get a shorter response that gets me the same outcome, and validate those small pieces, which saves a lot of time (I no longer have to read a 100-page white paper; instead I read a few paragraphs and then verify small bits).
Dumbed down doesn’t mean shorter.
If the amount of time it takes to create the prompt is the same as it would have taken to write the dumbed down text, then the only time you saved was not learning how to write dumbed down text. Plus you need to know what dumbed down text should look like to know if the output is dumbed down but still accurate.
I mean, I would argue that the answer in the OP is a good one. No human asking that question honestly wants to know the sum total of Rs in the word, they either want to know how many in “berry” or they’re trying to trip up the model.
Here’s a bit of code that’s supposed to do stuff. I got this error message. Any ideas what could cause this error and how to fix it? Also, add this new feature to the code.
Works reasonably well as long as you have some idea how to write the code yourself. GPT can do it in a few seconds; debugging it would take like 5–10 minutes, but that’s still faster than my best. Besides, GPT is also fairly fluent in many functions I have never used before. My approach would be clunky and convoluted, while the code generated by GPT is a lot shorter.
If you’re well familiar with the code you’re working on, GPT code will be convoluted by comparison. In that case, you can ask GPT for a rough alpha version, and you can do the debugging and refining in a few minutes.
That makes sense as long as you’re not writing code that needs to know how to do something as complex as …checks original post… count.
One thing I find useful is being able to turn installation/setup instructions into Ansible roles and tasks. If you’re unfamiliar, Ansible is a tool for automated configuration of large-scale server infrastructures. In my case I only manage two servers, but it is useful to parse instructions and convert them to Ansible, helping me learn and understand Ansible at the same time.
Here is an example of instructions I find interesting: how to set up Docker on Alpine Linux: https://wiki.alpinelinux.org/wiki/Docker
Results are actually quite good even for smaller 14B self-hosted models like the distilled versions of DeepSeek, though I’m sure there are other usable models too.
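As a sketch of what that conversion can look like: the wiki page’s steps largely boil down to installing the docker package and enabling its service, which might land in a role as something like the tasks below. The module, package, and service names here are my assumptions, not verified against that page:

```yaml
# Hypothetical Ansible tasks sketching the Docker-on-Alpine setup steps.
- name: Install Docker from the Alpine repositories
  community.general.apk:
    name: docker
    state: present

- name: Enable and start the Docker service
  ansible.builtin.service:
    name: docker
    enabled: true
    state: started
```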
I find it helpful for assisting with programming too, both for getting things done and for learning.
I would not rely on it for factual information, but it usually does a decent job of pointing in the right direction. Another use I have is help with spell-checking in a foreign language.
I think I have seen this exact post word for word fifty times in the last year.
Has the number of "r"s changed over that time?
Yes
y do you ask?
Just playing, friend.
Same, I was making a pun
Oh, I see! Apologies.
No apologies needed. Enjoy your day and keep the good vibes up!
This is literally just a tokenization artifact. If I asked you how many r’s are in /0x5273/0x7183 you’d be confused too.
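A rough way to see the gap: a plain program works on characters, while the model only ever sees token IDs. A small Python sketch (the token IDs below are made up for illustration, echoing the hex IDs above):

```python
# Counting letters is trivial when you can see the characters:
word = "strawberry"
print(word.count("r"))  # 3

# But an LLM receives opaque subword token IDs instead, e.g. (made-up IDs):
tokens = [0x5273, 0x7183]  # imagine "straw" + "berry" mapped to integers
# From the integers alone there is no way to count the letter 'r';
# you'd need the tokenizer's vocabulary to map IDs back to text first.
```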
It’s predictive text on speed. The LLMs currently in vogue hardly qualify as A.I. tbh…
Still, it’s kinda insane how two years ago we didn’t imagine we would be instructing programs like “be helpful but avoid sensitive topics”.
That was definitely a big step in AI.
It’s like someone who has no formal education but has a high level of confidence and eavesdrops on a lot of random conversations.
I know right? It’s not a fruit it’s a vegetable!
From a linguistic perspective, this is why I am impressed by (or at least, astonished by) LLMs!
The terrifying thing is everyone criticising the LLM as being poor, when it actually excelled at the task.
The question asked was how many Rs are in “strawbery”, and it answered: 2.
It also detected the typo and offered the correct spelling.
What’s the issue I’m missing?
Uh oh, you’ve blown your cover, robot sir.
deleted by creator
It is wrong. Strawberry has 3 r’s
deleted by creator
Uh, no, that is not common parlance. If any human tells you that strawberry has two r’s, they are also wrong.
there are two ‘r’s in ‘strawbery’