As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. This shift could make online searches less fruitful in the future, with fewer discussions and solutions available publicly. Imagine troubleshooting a tech issue and finding nothing online because everyone else asked an LLM instead. You do the same, but the LLM only knows the manual, offering no further help. Stuck, you contact tech support, wait weeks for a reply, and the cycle continues—no new training data for LLMs or new pages for search engines to index. Could this lead to a future where both search results and LLMs are less effective?
No. It hallucinates all the time.
Yes, but search engines will serve you LLM generated slop instead of search results, and sites like Stack Overflow will die due to lack of visitors, so the internet will become a reddit-like useless LLM ridden hellscape completely devoid of any human users, and we’ll have to go back to our grandparents’ old dusty paper encyclopedias.
Eventually, in a decade or two, once the bubble has burst and google, meta, and all those bastards have starved each other to death, we might be able to start rebuilding a new internet, probably reinventing usenet over ad-hoc decentralised wifi networks, but we won’t get far, we’ll die in the global warming wars before we get it to any significant size.
At least some bastards will have made billions out of the scam, though, so there’s that, I suppose. 🤷‍♂️
Sure does, but somehow many of the answers still work well enough. In many contexts, the hallucinations are only speed bumps, not show stopping disasters.
It told people to put glue on their pizza to keep the cheese from sliding off. It’s pretty fucking awful.
Copilot wrote me some code that totally does not work. I pointed out the bug and told it exactly how to fix the problem. It said it fixed it and gave me the exact same buggy trash code again. Yes, it can be pretty awful. LLMs fail in some totally absurd and unexpected ways. On the other hand, it knows the documentation of every function, but somehow still fails at some trivial tasks. It’s just bizarre.
It does this because it inherently hallucinates. It’s just an analytical letter guesser that sounds human because it amalgamates and predicts the next word. It’s just gotten so much input that it can sound human. But it has no concept of right and wrong, even when you tell it that it’s wrong. It doesn’t understand anything. That’s why it sucks. And that’s why it will always suck. It will not replace search because it makes shit up. I use it for coding here and there as well, and it’s just making up functions that don’t exist or attributing functions to packages that aren’t real.
No, because I ignore whatever AI slop comes up when I search for something
I have never found it to be anything other than useless. I will actively search for a qualified answer to my questions, rather than being lazy and relying on the first thing that pops up
What I’m worried about are traditional indexers being intentionally nerfed, discontinued, or left unmaintained at best. I’ve often wondered what it would take to self-host a personal indexer. I remember a time when the search giant AltaVista had a full-text index of the then-known internet on their DEC Alpha server(s).
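For what it’s worth, the bare-bones core of a personal indexer is not that exotic. Here’s a minimal sketch using SQLite’s built-in FTS5 full-text extension (this assumes your SQLite build includes FTS5, which most standard Python distributions do; the URLs and page texts are made-up placeholders):

```python
# Tiny full-text index: store pages, then run a ranked keyword search.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE pages USING fts5(url, body)")

pages = [
    ("https://example.com/a", "troubleshooting printer scan to usb drive"),
    ("https://example.com/b", "recipe for sourdough pizza dough"),
]
conn.executemany("INSERT INTO pages VALUES (?, ?)", pages)

# FTS5 exposes a built-in 'rank' column for relevance ordering.
for (url,) in conn.execute(
    "SELECT url FROM pages WHERE pages MATCH 'printer' ORDER BY rank"
):
    print(url)  # only the troubleshooting page matches
```

The hard part of AltaVista’s job was never the query side shown here; it was crawling and storing the corpus. But for a personal archive (bookmarks, saved threads, docs), something this simple already goes a long way.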
AltaVista was great!
Now I’m definitely showing my age…
The problem lies with the way the “modern” internet works, loading everything dynamically. Static pages to index are becoming rarer. Also, a lot of information is being “lost” in proprietary systems like Discord. Those can’t be indexed (easily) either.
To be fair, given the current state of search engines, LLMs might not be the worst idea.
I’m looking for the 7800x3d, not 3D shooters, not the 1234x3d, no not the pentium 4, not the 4700rtx. It takes more and more effort to search for something, and the first pages show every piece of crap I’m not interested in.
Google made the huge mistake of placing the CEO of ads in charge of search.
And now it fucking sucks.
I think you’ll be in a loud minority; people don’t like additional work.
Probably
But I don’t see it as work
“Work” is unfucking a situation that I created by being lazy in the first place rather than doing something properly
I’m probably showing my age though…
Even “let me Google that for you” was popular only a few years ago. Yes, people are lazy, unthinking hedonists most of the time. In the absence of some sort of strict moral basis, society degenerates, because only the tiniest minority will even think about things to try to establish some personal rules.
I still use https://lmgtfy.com/ as a public shame for anyone that can’t be arsed to put in a bit of effort to find something.
You only ignore AI slop when you recognize it as such.
I specifically ignore the google “AI summary”
I also tend to go through the results until I get something from a qualified source.
I’m sure I’m getting some of the aforementioned AI slop, but I would wager that I’m getting better results than the people I know who specifically look for an AI summary.
And where does the LLM get its answers? Forums and social media. And if the LLM doesn’t have the actual answer, it blabbers like a redditor, and if someone can’t get an accurate answer, they start asking forums and social media.
So no, LLMs will not replace human interaction, because LLMs rely on human interaction. An LLM cannot diagnose your car without a human diagnosing your car first.
And if the LLM doesn’t have the actual answer, it blabbers like a redditor, and if someone can’t get an accurate answer, they start asking forums and social media.
LLMs are completely incapable of giving a correct answer, except by random chance.
They’re extremely good at giving what looks like a correct answer, and convincing their users that it’s correct, though.
When LLMs are the only option, people won’t go elsewhere to look for answers, regardless of how nonsensical or incorrect they are, because the answers will look correct, and we’ll have no way of checking them for correctness.
People will get hurt, of course. And die. (But we won’t hear about it, because the LLMs won’t talk about it.) And civilization will enter a truly dark age of mindless ignorance.
But that doesn’t matter, because the company will have already got their money, and the line will go up.
They’re extremely good at giving what looks like a correct answer,
Exactly. Sometimes the thing that looks right IS right, and sometimes it’s not. The stochastic parrot doesn’t know the difference
The problem is that the LLMs have stolen all that information, repackaged it in ways that are subtly (or blatantly) false or misleading, and then hidden the real information behind a wall of search results that are entire domains of AI trash. It’s very difficult to even locate the original sources or forums anymore.
I’ve even tried to use Gemini to find a particular YouTube video matching specific criteria. Unsurprisingly, it gave me a bunch of videos, none of which were even close to what I was looking for.
That’s true. There could be a balance of sorts. Who knows. If LLMs become increasingly useful, people start using them more. As they lose training data, quality goes down, and people shift back to forums etc. Could work that way too.
There have been enough times that I googled something, saw the AI answer at the top, and repeated it like gospel, only to look like a buffoon when we realized the AI was completely wrong.
Now I look right past the AI answer and read the sources it’s pulling from. Then I don’t have to worry about anything misinterpreting the answer.
True, but soon the sources will be AI generated too, in a big GIGO loop.
That’s exactly what I’m worried about happening. What if one day there are hardly any sources left?
At this rate that day is not too distant, I’m afraid.
I was expecting either Huxley or Orwell to be right, not both.
Interestingly, there’s an Intelligence Squared episode that explores that very point. As usual, there’s a debate, voting and both sides had some pretty good arguments. I’m convinced that Orwell and Huxley were correct about certain things. Not the whole picture, but specific parts of it.
Agreed, if we look closely we can find some Bradbury and William Gibson elements in the lovely dystopia we’re currently enjoying.
Oh absolutely. Cyberpunk was meant to feel alien and revolting, but nowadays it is beginning to feel surprisingly familiar. Still revolting though, just like the real world.
LLMs seem awesome in their knowledge until you start to hear their answers on stuff you already know, and it makes you wonder if anything was correct.
What they call hallucinations was, in other areas, called fabulation: inventing tales or stories.
I’m curious what the shortest acceptable answer for these things is, and whether something close to “I don’t know” is even an option.
I get the feeling that LLMs are designed to please humans, so uncomfortable answers like “I don’t know” are out of the question.
- This thing is broken. How do I fix it?
- Don’t know. 🤷
- Seriously? I need an answer! Any ideas?
- Nope. You’re screwed. Best of luck to you. Figure it out. I believe in you. ❤️
My 70-year-old boss and his 50-year-old business partner just today generated a set of instructions for scanning to a thumb drive on a specific model of printer.
They obviously missed the “AI Generated” tag on the Google search and couldn’t figure out why the instructions cited the exact model but told them to press buttons and navigate menus that didn’t exist.
These are average people, and they didn’t realize that they were even using AI, much less how unreliable it can be.
I think there’s going to be a place for forums to discuss niche problems for as long as AI just means advanced LLMs and not actual intelligence.
When diagnosing software-related tech problems with proper instructions, there’s always the risk of finding outdated tips. You may be advised to press buttons that no longer exist in the version you’re currently using.
With hardware though, that’s unlikely to happen, as long as the model numbers match. However, when relying on AI generated instructions, anything is possible.
It’s not so simple with hardware either. Although less frequent, hardware also has variants, the nuances of which are easily missed by LLMs.
Trouble is that ‘quick answers’ mean the LLM took no time to do a thorough search. Could be right or wrong - just by luck.
When you need the details to be verified by trustworthy sources, it’s still do-it-yourself time. If you -don’t- verify, and repeat a wrong answer to someone else, -you- are untrustworthy.
A couple months back I asked GPT a math question (about primes) and it gave me the -completely wrong- answer … ‘none’ … answered as if it had no doubt. It was -so- wrong it hadn’t even tried. I pointed it to the right answer (‘an infinite number’) and to the proof. It then verified that.
A couple of days ago, I asked it the same question … and it was completely wrong again. It hadn’t learned a thing. After some conversation, it told me it couldn’t learn. I’d already figured that out.
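For anyone curious, the proof in question is presumably Euclid’s: any finite list of primes misses at least one, because the product of the list plus one has a prime factor that none of the listed primes divide. A quick sketch in Python (the helper name is mine, purely for illustration):

```python
# Euclid's argument, made concrete: given a finite list of primes,
# produce a prime that is not in the list.
from math import prod

def new_prime(primes):
    n = prod(primes) + 1
    # Any prime factor of n works: n % p == 1 for every p in `primes`,
    # so the factor we find cannot be one of them.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

print(new_prime([2, 3, 5]))  # 31 = 2*3*5 + 1, itself prime
print(new_prime([2, 3, 5, 7, 11, 13]))  # 59, a factor of 30031
```

Since the list was arbitrary, no finite list can contain all primes, i.e. there are infinitely many — the answer the LLM confidently got wrong twice.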
Trouble is that ‘quick answers’ mean the LLM took no time to do a thorough search.
LLMs don’t “search”. They essentially provide weighted parrot-answers based on what they’ve seen elsewhere.
If you tell an LLM that the sky is red, they will tell you the sky is red. If you tell them your eyes are the colour of the sky, they will repeat that your eyes are red. LLMs aren’t capable of checking if something is true.
They’re just really fast parrots with a big vocabulary. And every time they squawk, it burns a tree.
Math problems are a unique challenge for LLMs, often resulting in bizarre mistakes. While an LLM can look up formulas and constants, it usually struggles with applying them correctly. For example, when counting the hours in a week, it says it calculates 7*24, which looks good, but somehow the answer is still 10 🤯. Like, WTF? How did that happen? That specific problem might not really be hard, but the same phenomenon shows up in more complicated problems too. I could give some other examples, but this post is long enough as it is.
For reliable results in math-related queries, I find it best to ask the LLM for formulas and values, then perform the calculations myself. The LLM can typically look up information reasonably accurately but will mess up the application. Just use the right tool for the right job, and you’ll be ok.
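For scale, the step the LLM fumbles is a one-liner in any REPL or calculator, which is exactly why doing the arithmetic yourself is cheap (plain Python here):

```python
# The calculation the LLM describes correctly but botches: hours in a week.
days_per_week = 7
hours_per_day = 24
print(days_per_week * hours_per_day)  # 168, not 10
```

Same division of labor as in the post: let the model fetch the formula, let a deterministic tool do the multiplying.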
Is your abuse of the ellipsis and dashes supposed to be ironic? Isn’t that a LLM tell?
I’m not even sure what the (‘phrase’) construct is even meant to imply, but it’s wild. Your abuse of punctuation in general feels like a machine trying to convince us it’s human or a machine transcribing a human’s stream of consciousness.
If the tech matures enough, potentially!
You’re not wrong about LLMs (currently) being bad with tech support, but so are search engines lol
to an extent, yes, but not completely