cross-posted from: https://infosec.pub/post/24994013
CJR study shows AI search services misinform users and ignore publisher exclusion requests.
Move fast and break things, brah!
Training AI with internet content was always going to fail, as at least 60% of users online are trolls. It’s even dumber than expecting you can have a child from anal sex.
Because of what you just wrote, some dumbass is going to try to have a child through anal sex after doing a Google search.
They’re not joking about a hypothetical. It was a real thing that happened.
they’ve been having sex the wrong way
that’s subjective
There’s no way this isn’t bullshit. Please let this be bullshit…
I’m gonna go ahead and try without a Google search.
I believe in you, please name your child after me if it works out.
Know that if it doesn’t work, I’m not giving up.
I believe in you, if you end up having twins please name them after this instance
There was that one time when an AI gave a pizza recipe including gluing the cheese down with Elmer’s glue, because that was suggested as a joke on Reddit once.
There will never be such a thing as a useful LLM.
My level of shitposting has increased dramatically ever since I learned that I’m not just trolling the person I replied to, but future generations to come. You gotta have a legacy you’re proud of before you kick the bucket, ya know what I mean?
Society grows great when old men plant shitposts whose shade they know they shall never sit in.
where do you think lawyers come from?
But you can; it’s about as likely as having one from a thigh-job, but it’s technically not impossible.
In the late 90s and early 2000s, internet search engines were designed to actually find relevant things … it’s what made Google famous
Since the 2010s, internet search engines have all been about monetizing, optimizing, directing, misdirecting, manipulating searches in order to drive users to the highest paying companies or businesses, groups or individuals that best knew how to use Search Engine Optimization. For the past 20 years, we’ve created an internet based on how we can manipulate everyone and everything in order to make someone money. The internet is no longer designed to freely and openly share information … it’s now just a wasteland of misinformation, disinformation, nonsense and manipulation because we are all trying to make money off one another in some way.
AI is just making all those problems bigger, faster and more chaotic. It’s trying to make money for someone but it doesn’t know how yet … but they sure are working on trying to figure it out.
Not just the search engines, but the websites themselves as well. Gaming the search engines is now an entire profitable industry, not just people putting links to their friends’ websites at the bottom of their webpage, or making a webring.
It’s just been a race to the bottom. The search engines get worse, as do the websites, and the whole thing is exacerbated by people today being able to churn out entire websites by the hundreds. Anyone trying to do things without playing the game simply ends up buried under layers of rubbish.
I’d say it’s a reflection of society.
Well, that’s less bad than 100% SEO optimized garbage with LLM generated spam stories around a few Amazon links.
Exactly. I would like to know the baseline.
Oh man, that’s too good. Thanks for sharing this. Now I kinda want to ask it about blue waffles, but I’m a little scared to.
I’m shocked!
Shocked I tell you!
Only 60%‽
Blows my mind that it’s so low.
I searched for pictures of Uranus recently. Google gave me pictures of Jupiter, and then the AI description on top chided me, telling me that what was shown were pictures of Jupiter, not Uranus. 20 years ago it would have just worked.
Stupid that we have to do this, but add
before:2022
and it filters out all the slop
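For anyone who wants to bake that filter into a bookmark or script: Google’s `before:` date operator can be appended to the query string directly. A minimal sketch (the `search_url` helper and the example query are my own illustration, not from this thread):

```python
from urllib.parse import urlencode

def search_url(query, before_year=2022):
    """Build a Google search URL with the before: date operator appended,
    so results are limited to pages indexed before the given year."""
    q = f"{query} before:{before_year}"
    return "https://www.google.com/search?" + urlencode({"q": q})

print(search_url("apple pie crust recipe"))
# https://www.google.com/search?q=apple+pie+crust+recipe+before%3A2022
```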
The same technology Elon Musk wants to use to process your taxes everyone!
The same technology the billionaire class wants to use to eliminate payroll entirely
Well yeah, they get their information from the Internet. Garbage in. Garbage out.
From the article…
Surprisingly, premium paid versions of these AI search tools fared even worse in certain respects. Perplexity Pro ($20/month) and Grok 3’s premium service ($40/month) confidently delivered incorrect responses more often than their free counterparts.
Though these premium models correctly answered a higher number of prompts, their reluctance to decline uncertain responses drove higher overall error rates.
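That trade-off reads oddly at first, so here’s a toy calculation (the numbers are made up for illustration, not the study’s actual figures) showing how a model can answer more prompts correctly yet post a worse error rate if it never declines:

```python
def error_rate(correct, wrong, declined):
    # Errors as a share of all prompts; a declined answer is not an error.
    total = correct + wrong + declined
    return wrong / total

# Hypothetical counts out of 100 prompts each.
free = error_rate(correct=50, wrong=20, declined=30)      # 0.20
premium = error_rate(correct=60, wrong=40, declined=0)    # 0.40

# Premium gets more answers right (60 > 50) but, by refusing to decline,
# turns every uncertain prompt into a confident wrong answer.
assert premium > free
```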
Fixing all the shit AI breaks is going to create a lot of jobs
Not enough
“Your father and I are for the jobs the ~~asteroid~~ AI will create.”
It’s strongly dependent on how you use it. Personally, I started out as a skeptic but by now I’m quite won over by LLM-aided search. For example, I was recently looking for an academic that had published some result I could describe in rough terms, but whose name and affiliation I was drawing a blank on. Several regular web searches yielded nothing, but Deepseek’s web search gave the result first try.
(Though, Google’s own AI search is strangely bad compared to others, so I don’t use that.)
The flip side is that for a lot of routine info that I previously used Google to find, like getting a quick and basic recipe for apple pie crust, the normal search results are now enshittified by ad-optimized slop. So in many cases I find it better to use a non-web-search LLM instead. If it matters, I always have the option of verifying the LLM’s output with a manual search.
To me the title seems misleading, as the research is very narrowly scoped. They provided news excerpts to the LLMs and asked for the title, the author, the publication date, and the URL. Is this something people actually do? I’d be interested if they had used some real-world examples.
60% of the time it works every time