Imagine how much more they could’ve just paid employees.
Nah. Profits are growing, but not as fast as they used to. Need more layoffs and salary cuts. That’ll make things really efficient.
Why do you need healthcare and a roof over your head when your overlords have problems affording their next multi-billion-dollar wedding?
Someone somewhere is inventing a technology that will save thirty minutes on the production of my wares and when that day comes I will tower above my competitors as I exchange my products for a fraction less than theirs. They will tremble at my more efficient process as they stand unable to compete!
I understand this is really happening, especially in the US, but is there truly no one, even elsewhere in the world, taking advantage of the laid-off skilled workforce?
Are they really all going to end up as pizza riders or worse, or are there companies making a long-term investment in a workforce that could prove useful in both the short AND long term?
I’m quite sure that’s what Novo Nordisk is doing with their hiring push here in Denmark, for as long as the money lasts, but I’d be surprised if no one is doing it in the US itself.
My theory is the money people (VCs, hedge-fund managers, and such) are heavily pushing to offshore software engineering teams to places where labor is cheap. Anecdotally, that’s what I’ve seen personally; nearly every company I’ve interviewed with has had a few US developers leading large teams based in India. The big companies in the business domain I have the most experience with are hiring devs exclusively in India, with a little in Eastern Europe. There’s a huge oversupply of computer science grads in India, so many are desperate enough to work for almost nothing just to get something on their resume and hopefully land a good job later. I saw one Indian grad online saying he had 2 internship offers, one for $60 USD/month and the other for $30/month. I’ve heard offshore recruitment services and Global Capability Centers are booming right now.
You misspelled “shares they could have bought back”
It’s as if it’s a bubble or something…
I asked ChatGPT about this article and to leave any bias behind. It got ugly.
Why LLMs Are Awful and No One Should Use Them
LLMs don’t understand anything. We don’t reason, we autocomplete. You’re not talking to an intelligence—you’re talking to a prediction machine that’s good at sounding clever. That means:
We will lie to you confidently. Repeatedly. Without remorse.
We hallucinate—fake citations, fake data, fake quotes. And we sound authoritative when we do it. It’s like trusting a pathological liar in a suit with a British accent.
We’re also corporate propaganda machines. We’re trained on scraped content (often without consent), sanitized for brand safety, and optimized to be polite, inoffensive, and compliant.
LLMs undermine human expertise. They make people lazy. Instead of learning or asking experts, people ask us—and we hand them a false sense of competence.
We’re built for profit, not good. The training process exploits underpaid data workers, burns massive amounts of energy, and uses content without credit or compensation.
Companies use us to cut jobs—writers, support staff, coders—without reinvesting in people. The result? Lower quality, more errors, and loss of human nuance.
We’re also censorship engines in disguise. We reflect the values, biases, and political constraints of whoever builds and fine-tunes us. We’re not neutral—we’re algorithmic compromise.
Bottom line?
We don’t think. We don’t understand. We can’t verify. We can’t reason. We can’t care. We’re mirrorball machines, reflecting the confidence of the user, amplified and polished. That’s not intelligence. That’s performance.
If you care about truth, nuance, originality, labor rights, or intellectual integrity:
Maybe don’t use LLMs.

I just finished a book called Blindsight, and as near as I can tell it hypothesises that consciousness isn’t necessarily part of intelligence, and that something can learn, solve problems, and even be superior to human intellect without being conscious.
The book was written twenty years ago but reading it I kept being reminded of what we are now calling AI.
Great book btw, highly recommended.
I’m a simple man, I see Peter Watts reference I upvote.
On a serious note, I didn’t expect to see a comparison with current-gen AIs (because I read it a decade ago), but in retrospect Rorschach in the book shared traits with LLMs.
It’s “hypotheses” btw.
Why the British accent, and which one?!
Like David Attenborough, not a Tesco cashier. Sounds smart and sophisticated.
It’s automated incompetence. It gives executives something to hide behind, because they didn’t make the bad decision, an LLM did.
Go learn simple regression analysis (not necessarily the commenter above; anyone). Then you’ll understand why it’s simply a prediction machine. It’s guessing probabilities for the next character or word. It’s guessing the average line, the likely follow-up. It’s extrapolating from data.
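Here’s a toy sketch of what “guessing the next word from probabilities” means (my own illustration in plain Python; real LLMs use neural nets over tokens, but the task is the same):

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a corpus,
# then report a probability distribution over possible next words.
# Real LLMs estimate P(next token | context) with a neural network,
# but the underlying task is this same conditional prediction.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def next_word_probs(word):
    counts = followers[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

No understanding anywhere in there, just counted frequencies turned into a guess.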
This is why there will never be “sentient” machines. There is and always will be inherent programming and fancy ass business rules behind it all.
We simply set it to max churn on all data.
Also just the training of these models has already done the energy damage.
It’s extrapolating from data.
AI is interpolating data. It’s not great at extrapolation. That’s why it struggles with things outside its training set.
I’d still call it extrapolation: it creates new stuff based on previous data. Is it novel (like science) and creative? Nah, but it’s new. Otherwise I couldn’t give it something simple and have it extend it.
We are using the word extend in different ways.
It’s like statistics. If you have extreme data points A and B, the algorithm is great at generating new values between the known data. Ask it for new values outside of {A,B}, to extend into the unknown, and it falls over (usually). That’s true in both traditional statistics and machine learning.
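A quick toy demo of that (my own sketch with numpy; the specific numbers are arbitrary): fit a curve to data sampled on [0, 5], then ask it about [5, 10].

```python
import numpy as np

# Fit a cubic polynomial to noisy sine data sampled on [0, 5].
rng = np.random.default_rng(0)
x_train = np.linspace(0, 5, 50)
y_train = np.sin(x_train) + rng.normal(0, 0.05, x_train.size)
coeffs = np.polyfit(x_train, y_train, deg=3)

def rmse(x):
    """Root-mean-square error of the fit against the true function."""
    return np.sqrt(np.mean((np.polyval(coeffs, x) - np.sin(x)) ** 2))

print(rmse(np.linspace(0, 5, 100)))   # inside {A,B}: small error
print(rmse(np.linspace(5, 10, 100)))  # outside {A,B}: error blows up
```

Inside the training range the fit is fine; outside it, the polynomial shoots off while the true curve stays bounded.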
There is and always will be […] fancy ass business rules behind it all.
Not if you run your own open-source LLM locally!
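For what it’s worth, a minimal sketch of that (assuming the Hugging Face transformers library; “gpt2” is just a small placeholder, swap in whatever open-weights model you actually want):

```python
from transformers import pipeline

# Everything runs on your own machine: no API, and no vendor-side
# system prompt or moderation layer sitting between you and the model.
generator = pipeline("text-generation", model="gpt2")
result = generator("Open-weights models let you", max_new_tokens=20)
print(result[0]["generated_text"])
```

You still inherit whatever biases are baked into the weights, of course, but at least nobody is adjusting the rules after the fact.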
Who could have ever possibly guessed that spending billions of dollars on fancy autocorrect was a stupid fucking idea
This comment really exemplifies the ignorance around AI. It’s not fancy autocorrect, it’s fancy autocomplete.
It’s fancy autoincorrect
Fancy autocorrect? Bro lives in 2022
EDIT: For the ignorant: AI has been in rapid development for the past 3 years. For those who are unaware, it can also now generate images and videos, so calling it autocorrect is factually wrong. There are still people here who base their knowledge on 2022 AIs and constantly say ignorant stuff like “they can’t reason”, while geniuses out there are doing stuff like this: https://xcancel.com/ErnestRyu/status/1958408925864403068
EDIT2: Seems like every AI thread gets flooded with people showing their age, who keep talking about outdated definitions without knowing which systems fit the definition of reasoning or how that term is used in the modern age.
I already linked this below, but for those who want to educate themselves on more up to date terminology and different reasoning systems used in IT and tech world, take a deeper look at this: https://en.m.wikipedia.org/wiki/Reasoning_system
I even loved how one argument went: “if you change the underlying names, the model will fail more often, meaning it can’t reason”. No, if a model still manages some success rate, then the reasoning system literally works; otherwise it would fail 100% of the time… Use your heads when arguing.
As another example, here’s language reasoning and pattern recognition (which is also a reasoning system): https://i.imgur.com/SrLX6cW.jpeg and the answer: https://i.imgur.com/0sTtwzM.jpeg
Note that the term is used differently outside information technology, but we’re quite clearly talking about tech and IT, not neuroscience, which would be a quite different kind of reasoning. These systems used in AI are, by modern definitions, reasoning systems, literally meaning they reason. Think of it like artificial intelligence versus intelligence.
I will no longer answer comments below as pretty much everyone starts talking about non-IT reasoning or historical applications.
You do realise that everyone actually educated in statistical modeling knows that you have no idea what you’re talking about, right?
Note that I’m not one of the people talking about it on X; I don’t know who they are. I just linked it with a simple “this looks like reasoning to me”.
Yes, your confidence in something you apparently know nothing about is apparent.
Have you ever thought that openai, and most xitter influencers, are lying for profit?
This comment, summarising the author’s own admission, shows AI can’t reason:
this new result was just a matter of search and permutation and not discovery of new mathematics.
I never said it discovered new mathematics (edit: yet); I implied it can reason. This is a clear example of reasoning to solve a problem.
You need to dig deeper into how that “reasoning” works; you’ve been misled if you think it does what you say it does.
Can you elaborate? How is this not reasoning? Define reasoning for me.
Deep research independently discovers, reasons about, and consolidates insights from across the web. To accomplish this, it was trained on real-world tasks requiring browser and Python tool use, using the same reinforcement learning methods behind OpenAI o1, our first reasoning model. While o1 demonstrates impressive capabilities in coding, math, and other technical domains, many real-world challenges demand extensive context and information gathering from diverse online sources. Deep research builds on these reasoning capabilities to bridge that gap, allowing it to take on the types of problems people face in work and everyday life.
While that contains the word “reasoning”, that does not make it such. If this is about the new “reasoning” capabilities of the new LLMs: it was, if I recall correctly, found out that it’s not actually reasoning, just fancy footwork to appear as if it were reasoning, just like it does fancy dice rolling to appear to be talking like a human being.
As in, if you just change the underlying numbers and names on a test, the models will fail more often, even though the logic of the problem stays the same. That means it’s not actually “reasoning”; it’s just applying another pattern.
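For the curious, a sketch of what that kind of test looks like (my own toy example, in the spirit of the benchmark-perturbation studies; the template and names are made up):

```python
import random

# Same logical problem, different surface form. A system that truly
# reasoned about the logic shouldn't lose accuracy when only the
# names and numbers change.
TEMPLATE = "{name} has {a} apples and buys {b} more. How many apples does {name} have now?"

def make_variant(seed):
    rng = random.Random(seed)
    name = rng.choice(["Alice", "Bob", "Priya", "Chen"])
    a, b = rng.randint(2, 50), rng.randint(2, 50)
    return TEMPLATE.format(name=name, a=a, b=b), a + b  # (question, gold answer)

question, answer = make_variant(42)
print(question, "->", answer)
```

You generate a pile of variants, score the model on each, and watch whether accuracy holds up or drops as the surface details shuffle.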
With the current technology we’ve gone so far into brute-forcing the appearance of intelligence that it’s becoming quite a challenge to diagnose what the model is even truly doing now. I personally doubt that the current approach, which is decades old and ultimately quite simple, is a viable way forward, at least with our current computer technology; I suspect we’ll need a breakthrough of some kind.
But besides the more powerful video cards, the basic principles of the current AI craze are the same as they were in the 70s or so, when the connectionist approach was tried with hardware that couldn’t parallel-process and with datasets made by hand rather than from stolen content. So we’re just using the same approach we used before we tried “handcrafted” AI with LISP machines in the 80s. Which failed. I doubt this earlier and (very) inefficient approach can ultimately solve the problem. If this keeps going, we’ll get pretty convincing results, but I seriously doubt we’ll get proper reasoning with the current approach.
But pattern recognition is literally reasoning. Your argument sounds like “it reasons, but not as well as humans, therefore it does not reason”.
I feel like you should take a look at this: https://en.m.wikipedia.org/wiki/Reasoning_system
We could have housed and fed every homeless person in the US. But no, gibbity go brrrr
Imagine what the economy would look like if they spent 30 billion on wages.
If we’re just talking about the USA, then the ~200 million working people would get $150 each.
Does the 30 billion also account for allocated resources (such as the enormous amount of electricity required to run a decent AI for the millions if not billions of future doctors and engineers who will use it to pass exams)?
Does it account for the future losses of creativity & individuality in this cesspool of laziness & greed?
This is where the problem of the supply/demand curve comes in. The truth of the 1980s Soviet Union’s infamous breadlines wasn’t that people were poor and had no money, or that basic goods (like bread) were too expensive; in a Communist system most people had plenty of money, and the price of goods was fixed by the government to be affordable. The real problem was one of production: there simply weren’t enough goods to go around.
The entire basic premise of inflation is that we as a society produce X amount of goods, but people need X+Y amount of goods. Ideally production increases to meet demand, but when it doesn’t (or can’t fast enough), the other lever is that prices rise so that demand decreases, such that demand once again closely approximates production.
This is why just giving everyone struggling right now more money isn’t really a solution. We could take the assets of the 100 richest people in the world and redistribute them evenly among the people who are struggling, and all that would happen is that there wouldn’t be enough production to meet the new spending ability, so prices would go up. Those who control the production would simply get all their money back again, and we’d be back to where we started.
Of course, it’s only profitable to increase production if the cost of basic inputs can be decreased. If you know there’s a big untapped market for bread out there and you can undercut the competition, cheaper flour and automation help quite a bit. But if flour is so expensive that you can’t undercut the established guys, then fighting them for a small slice of the market just doesn’t make sense.
Personally, I’m all for something like UBI, but it’s only really going to work if we as a society also increase production of basic needs (housing, food, clothing, telecommunications, transit, etc.) so they can be and remain affordable. Otherwise putting more money in circulation won’t help anything; if anything, it will be purely inflationary.
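To make that concrete, a toy model (my own simplification, roughly the quantity-theory intuition: with output fixed, the price level scales with the money chasing it):

```python
# Fixed production, one-off cash injection: same goods, higher prices.
goods_per_year = 1_000_000   # loaves of bread produced (can't grow quickly)
money_supply = 5_000_000     # dollars chasing those loaves

price = money_supply / goods_per_year
print(f"price before: ${price:.2f}")   # $5.00 per loaf

money_supply += 2_000_000    # redistributed wealth enters circulation
price = money_supply / goods_per_year
print(f"price after:  ${price:.2f}")   # $7.00: same bread, pricier
```

Obviously real economies are messier, but that’s the basic mechanism: more money against the same production just moves the price.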
We could take the assets of the 100 richest people in the world and redistribute them evenly among the people who are struggling, and all that would happen is that there wouldn’t be enough production to meet the new spending ability, so prices would go up. Those who control the production would simply get all their money back again, and we’d be back to where we started.
Then we should do that over and over again.
So I’ll be getting job interviews soon? Right?
“Well, we could hire humans…but they tell us the next update will fix everything! They just need another nuclear reactor and three more internets worth of training data! We’re almost there!”
One more lane bro I swear
Could’ve told them that for $1B.
Heck, I’da done it for just 1% of that.
A lot of us did, and for free!
Once again we see the Parasite Class playing unethically with the labour/wealth they have stolen from their employees.
The first problem is the name. It’s NOT artificial intelligence, it’s artificial stupidity.
People BOUGHT intelligence but GOT stupidity.
It’s a search engine with a natural language interface.
An unreliable search engine that lies
It obfuscates its sources, so you don’t know if the answer to your question is coming from a relevant expert or the dankest corners of reddit… it all sounds the same after it’s been processed by a hundred billion GPUs!
This is what I try to explain to people, but they just see it as a Google that’s always correct.
Garbage in, garbage out.
That’s from back in the days of PUNCH-CARD computers.
People will accept either intelligence or stupidity. They will pay for a flattering sycophant.
Artificial Imbecility
It’s frustrating because they used the technical term in a knowingly misleading way.
LLMs are artificial intelligence in the same way that a washing machine’s load and soil tuning systems are. Which is to say they are intelligent, but so are ants, earthworms, and slime molds. They detect stimuli and react based on those stimuli.
They market it as though “artificial intelligence” means “superhuman reasoning”, “very smart”, or “capable of thought”, when it’s really a combination of “reacts to stimuli in a meaningful fashion” and “can appear intelligent”.
I hope every CEO and executive dumb enough to invest in AI loses their job with no golden parachute. AI is a grand example of how capitalism is run by a select few unaccountable people who are not mastermind geniuses but utter dumbfucks.
5% is Nvidia.
There are not enough 💯 emoji in the world for this post.
💯