An alarming number of people can’t read over a 6th grade level.
An alarming number of them believe that they are conscious too, when they show no signs of it.
I think an alarming number of Gen Z internet folks find it funny to skew the results of anonymous surveys.
Yeah, what is it with Gen Z? Millennials would never skew the results of anonymous surveys.
Right? Just insane to think that Millennials would do that. Now let me read through this list of Time Magazine's top 100 most influential people of 2009.
I checked the source and I can’t find their full report or even their methodology.
I wish philosophy was taught a bit more seriously.
An exploration on the philosophical concepts of simulacra and eidolons would probably change the way a lot of people view LLMs and other generative AI.
Same generation who takes astrology seriously, I’m shocked
Taking astrology seriously isn't a Gen Z-only thing, where have you been?
The LLM peddlers seem to be going for that exact result. That’s why they’re calling it “AI”. Why is this surprising that non-technical people are falling for it?
That’s why they’re calling it “AI”.
That’s not why. They’re calling it AI because it is AI. AI doesn’t mean sapient or conscious.
Edit: look at this diagram if you’re still unsure:
In the general population it does. Most people are not using an academic definition of AI, they are using a definition formed from popular science fiction.
You have that backwards. People are using the colloquial definition of AI.
“Intelligence” is defined by a group of things like pattern recognition, ability to use tools, problem solving, etc. If one of those criteria is met then the thing in question can be said to have intelligence.
A flat worm has intelligence, just very little of it. An object detection model has intelligence (pattern recognition) just not a lot of it. An LLM has more intelligence than a basic object detection model, but still far less than a human.
Yes, that’s the point. You’d think they could have, at least, looked into a dictionary at some point in the last 2 years. But nope, everyone else is wrong. A round of applause for the paragons of human intelligence.
Lots of people lack critical thinking skills
This is an angle I’ve never considered before, with regards to a future dystopia with a corrupt AI running the show. AI might never advance beyond what it is in 2025, but because people believe it’s a supergodbrain, we start putting way too much faith in its flawed output, and it’s our own credulity that dismantles civilisation rather than a runaway LLM with designs of its own. Misinformation unwittingly codified and sanctified by ourselves via ChatGeppetto.
The call is coming from inside the ~~house~~ mechanical Turk!

They call it hallucinations like it’s a cute brain fart, and “Agentic” means they’re using the output of one to be the input of another, which has access to things and can make decisions and actually fuck things up. It’s a complete fucking shit show. But humans are expensive so replacing them makes line go up.
That’s the intended effect. People with real power think this way: “where it does work, it’ll work and not bother us with too much initiative and change, and where it doesn’t work, we know exactly what to do, so everything is covered”. Checks and balances and feedbacks and overrides and fallbacks be damned.
Humans are apes. When an ape gets to rule an empire, it remains an ape and the power kills its ability to judge.
I’ve been hearing a lot about gen z using them for therapists, and I find that really sad and alarming.
AI is the ultimate societal yes man. It just parrots back stuff from our digital bubble because it’s trained on that bubble.
ChatGPT disagrees that it’s a yes-man:
To a certain extent, AI is like a societal “yes man.” It reflects and amplifies patterns it’s seen in its training data, which largely comes from the internet—a giant digital mirror of human beliefs, biases, conversations, and cultures. So if a bubble dominates online, AI tends to learn from that bubble.
But it’s not just parroting. Good AI models can analyze, synthesize, and even challenge or contrast ideas, depending on how they’re used and how they’re prompted. The danger is when people treat AI like an oracle, without realizing it’s built on feedback loops of existing human knowledge—flawed, biased, or brilliant as that may be.
to be honest they probably wish it was conscious because it has more of a conscience than conservatives and capitalists
An alarming number of Hollywood screenwriters believe consciousness (sapience, self awareness, etc.) is a measurable thing or a switch we can flip.
At best consciousness is a sorites paradox. At worst, it doesn’t exist and while meat brains can engage in sophisticated cognitive processes, we’re still indistinguishable from p-zombies.
I think the latter is more likely, and will reveal itself when AGI (or genetically engineered smart animals) can chat and assemble flat furniture as well as humans can.
(On mobile. Will add definition links later.) << Done!
I’d rather not break down a human being to the same level of social benefit as an appliance.
Perception is one thing, but the idea that these things can manipulate and misguide people who are fully invested in whatever process they have, irks me.
I’ve been on nihilism hill. It sucks. I think people, and living things garner more genuine stimulation than a bowl full of matter or however you want to boil us down.
Oh, people can be bad, too. There’s no doubting that, but people have identifiable motives. What does an AI “want?”
whatever it’s told to.
You’re not alone in your sentiment. The whole thought experiment of p-zombies and the notion of qualia comes from a desire to assume human beings should be given a special position, but in that case, a sentient being is whoever we decide it is, the way Sophia the Robot is a citizen of Saudi Arabia (even though she’s simpler than GPT-2, unless they’ve upgraded her and I missed the news).
But it will raise a question when we do come across a non-human intelligence. It was a question raised in both the Blade Runner movies: what happens when we create synthetic intelligence that is as bright as a human, or even brighter? If we’re still capitalist, assuredly the companies that made them will not be eager to let them have rights.
Obviously machines and life forms as sophisticated as we are are not merely the sum of our parts, but the same can be said about most other macro-sized life on this planet, and we’re glad to assert they are not sentient the way we are.
What aggravates me is not that we’re just thinking meat but with all our brilliance we’re approaching multiple imminent great filters and seem not to be able to muster the collective will to try and navigate them. Even when we recognize that our behavior is going to end us, we don’t organize to change it.
Humans also want what we’re told to, or we wouldn’t have advertising.
It runs deeper than that. You can walk back the whys pretty easily to identify anyone’s motivation, whether it be personal interest, bias, money, glory, racism, misandry, greed, insecurity, etc.
No one is buying rims for their car for no reason. No one is buying a firearm for no reason. No one donates to a food bank, or runs for president, for no reason, that sort of thing.
AI is backed by the motive of a for-profit company, and unless you’re taking that grain of salt, you’re likely allowing yourself to be manipulated.
“Corporations are people too, friend!” - Mitt Romney
Bringing in the underlying concept of free will. Robert Sapolsky makes a very compelling case against it in his book, Determined.
Assume that free will does not exist, at least not to the extent many believe it does. Then the notion that we can “walk back the whys” to identify anyone’s motivation becomes almost, or entirely, absolute.
Does motivation matter in the context of determining sentience?
If something believes and conducts itself under its programming, whether psychological or binary programming, that it is sentient and alive, the outcome is indistinguishable. I will never meet you, so to me you exist only as your user account and these messages. That said, we could meet, and that obviously differentiates us from incorporeal digital consciousness.
Divorcing motivation from the conversation now, the issue of control you brought up is interesting as well. Take for example Twitter’s Grok’s accurate assessment of its creators’ shittiness, and that it might be altered for it. Outcomes are the important part.
It was good talking with you! Highly recommend the book above. I did the audiobook out of necessity during my commute and some of the material makes it better for hardcopy.
They also are the dumbest generation with a COVID education handicap and the least technological literacy in terms of mechanics comprehension. They have grown up with technology that is refined enough to not need to learn troubleshooting skills past “reboot it”.
That they don’t understand that an LLM can’t be conscious is not surprising. LLMs are a neat trick, but far from anything close to consciousness or intelligence.
Have fun with your back problems!
It will happen to YOU!
I wasn’t aware the generation of CEOs and politicians was called “Gen Z”.
We have to make the biggest return on our investments, fr fr
The article targets its study on Gen Z but… yeah, the elderly aren’t exactly winners here, either.
An Alarming Number of Anyone Believes Fortune Cookies
Just … accept it, superstition is in human nature. When you take religion away from people, they need something. It’ll either be racism/fascism, or expanding consciousness via drugs, or belief in UFOs, or communism at least, but they need something.
The last good one was the digital revolution, globalization, world wide web, all that, no more wars (except for some brown terrorists, but the rest is fine), everyone is free and civilized now (except for those with P*tin as president and other such types, but it’s just an imperfect democracy don’t you worry), SG-1 series.
Anything changing our lives should have an intentionally designed religious component, or humans will improvise that where they shouldn’t.