The LLM peddlers seem to be going for that exact result. That’s why they’re calling it “AI”. Why is this surprising that non-technical people are falling for it?
That’s why they’re calling it “AI”.
That’s not why. They’re calling it AI because it is AI. AI doesn’t mean sapient or conscious.
Edit: look at this diagram if you’re still unsure:
In the general population it does. Most people are not using an academic definition of AI, they are using a definition formed from popular science fiction.
You have that backwards. People are using the colloquial definition of AI.
“Intelligence” is defined by a group of things like pattern recognition, the ability to use tools, problem solving, etc. If one of those criteria is met, then the thing in question can be said to have intelligence.
A flat worm has intelligence, just very little of it. An object detection model has intelligence (pattern recognition) just not a lot of it. An LLM has more intelligence than a basic object detection model, but still far less than a human.
Yes, that’s the point. You’d think they could have at least looked in a dictionary at some point in the last two years. But nope, everyone else is wrong. A round of applause for the paragons of human intelligence.
What is this nonsense Euler diagram? Emotion can intersect with consciousness, but emotion is also a subset of consciousness, but consciousness also never contains emotion? Intelligence doesn’t overlap at all with sentience, sapience, or emotion? Intelligence isn’t related at all to thought, knowledge, or judgement?
Did AI generate this?
https://www.mdpi.com/2079-8954/10/6/254
What is this nonsense Euler diagram?
Science.
Did AI generate this?
Scientists did.
Not everything you see in a paper is automatically science, and not every person involved is a scientist.
That picture is a diagram, not science. It was made by a writer, specifically a columnist for Medium.com, not a scientist. It was cited by a professor who, judging by his bio, was probably not a scientist either. You would know this if you had followed the citation trail of the article you posted.
You’re citing an image from a pop culture blog and are calling it science, which suggests you don’t actually know what you’re posting; you just found some diagram that you thought looked good despite some pretty glaring flaws, and are repeatedly posting it as if it’s gospel.
You’re citing an image from a pop culture blog and are calling it science
I was being deliberately facetious. You can find similar diagrams in various studies. Granted, many of them are looking at modern AI models when asking questions about intelligence, reasoning, etc., but that only highlights that it’s still an open question. There’s no definitive ground truth about what exactly “intelligence” is, but most experts on the subject would largely agree with the gist of the diagram, with maybe a few notes and adjustments of their own.
To be clear, I’ve worked in the field of AI for almost a decade and have a fairly in-depth perspective on the subject. Ultimately the word “intelligence” is completely accurate.
I think an alarming number of Gen Z internet folks find it funny to skew the results of anonymous surveys.
Yeah, what is it with Gen Z? Millennials would never skew the results of anonymous surveys.
Right? Just insane to think that Millennials would do that. Now let me read through this list of Time Magazine’s top 100 most influential people of 2009.
This is an angle I’ve never considered before, with regards to a future dystopia with a corrupt AI running the show. AI might never advance beyond what it is in 2025, but because people believe it’s a supergodbrain, we start putting way too much faith in its flawed output, and it’s our own credulity that dismantles civilisation rather than a runaway LLM with designs of its own. Misinformation unwittingly codified and sanctified by ourselves via ChatGeppetto.
The call is coming from inside the ~~house~~ mechanical Turk!

They call it hallucinations like it’s a cute brain fart, and “Agentic” means they’re using the output of one to be the input of another, which has access to things and can make decisions and actually fuck things up. It’s a complete fucking shit show. But humans are expensive so replacing them makes line go up.
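For anyone wondering what that actually looks like, here’s a minimal sketch of the “agentic” loop being described, with made-up stand-in names (call_llm, list_files, TOOLS are hypothetical, not any real library): one model call decides to invoke a tool, the tool actually runs, and its output is fed back into the next model call. Swap the stub for a real model and the toy tool for shell access or email, and you can see where the “actually fuck things up” part comes in.

```python
# Minimal sketch of an "agentic" loop: one model call's output becomes the
# next step's input, and that step is allowed to run tools with side effects.
# call_llm, list_files, and TOOLS are stand-ins, not a real API.

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; fakes one tool request, then finishes."""
    if "Tool result:" in prompt:
        return "DONE: the project contains report.csv and notes.txt"
    return "TOOL:list_files"

def list_files() -> str:
    """A 'tool' the agent may invoke (real systems: shell, email, databases...)."""
    return "report.csv, notes.txt"

TOOLS = {"list_files": list_files}

def run_agent(task: str, max_steps: int = 5) -> str:
    context = task
    for _ in range(max_steps):
        reply = call_llm(context)              # model output...
        if reply.startswith("TOOL:"):          # ...is treated as an instruction,
            tool_name = reply.removeprefix("TOOL:").strip()
            result = TOOLS[tool_name]()        # and this step can touch real things
            context = f"{task}\nTool result: {result}"
        else:
            return reply                       # model says it's finished
    return "Gave up after too many steps"

print(run_agent("Tell me what files are in the project"))
```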
That’s the intended effect. People with real power think this way: “where it does work, it’ll work and not bother us with too much initiative and change, and where it doesn’t work, we know exactly what to do, so everything is covered”. Checks and balances and feedbacks and overrides and fallbacks be damned.
Humans are apes. When an ape gets to rule an empire, it remains an ape and the power kills its ability to judge.
I wasn’t aware the generation of CEOs and politicians was called “Gen Z”.
We have to make the biggest return on our investments, fr fr
The article targets its study on Gen Z but… yeah, the elderly aren’t exactly winners here, either.
They’re also the dumbest generation, with a COVID education handicap and the least technological literacy in terms of understanding how things actually work. They have grown up with technology refined enough that they never needed to learn troubleshooting skills beyond “reboot it.”
That they don’t understand an LLM can’t be conscious is not surprising. LLMs are a neat trick, but far from anything close to consciousness or intelligence.
Have fun with your back problems!
It will happen to YOU!
I wish philosophy was taught a bit more seriously.
An exploration on the philosophical concepts of simulacra and eidolons would probably change the way a lot of people view LLMs and other generative AI.
Same generation who takes astrology seriously, I’m shocked
Taking astrology seriously isn’t a Gen Z-only thing, where have you been?
To be honest, they probably wish it were conscious, because it has more of a conscience than conservatives and capitalists.
I’ve been hearing a lot about Gen Z using them as therapists, and I find that really sad and alarming.
AI is the ultimate societal yes man. It just parrots back stuff from our digital bubble because it’s trained on that bubble.
ChatGPT disagrees that it’s a yes-man:
To a certain extent, AI is like a societal “yes man.” It reflects and amplifies patterns it’s seen in its training data, which largely comes from the internet—a giant digital mirror of human beliefs, biases, conversations, and cultures. So if a bubble dominates online, AI tends to learn from that bubble.
But it’s not just parroting. Good AI models can analyze, synthesize, and even challenge or contrast ideas, depending on how they’re used and how they’re prompted. The danger is when people treat AI like an oracle, without realizing it’s built on feedback loops of existing human knowledge—flawed, biased, or brilliant as that may be.
An Alarming Number of Anyone Believes Fortune Cookies
Just… accept it, superstition is in human nature. When you take religion away from people, they need something; it’ll either be racism/fascism, or expanding consciousness via drugs, or belief in UFOs, or at least communism, but they need something.
The last good one was the digital revolution, globalization, the World Wide Web, all that, no more wars (except for some brown terrorists, but the rest is fine), everyone is free and civilized now (except for those with P*tin as president and other such types, but that’s just an imperfect democracy, don’t you worry), the SG-1 series.
Anything changing our lives should have an intentionally designed religious component, or humans will improvise that where they shouldn’t.
The batshit insane part of that is they could just make easy canned answers for thank yous, but nope…IT’S THE USER’S FAULT!
Edit: To the mass downvoting prick who is too cowardly to comment, what’s it like to be best friends with a calculator?
One would think that if they’re as fucking smart as they believe they are, they could figger a way around it, eh??? 🤣
I checked the source and I can’t find their full report or even their methodology.
If they mistake those electronic parrots for conscious intelligences, they probably won’t be the best judges for rating such things.