I see a huge amount of confusion around terminology in discussions about Artificial Intelligence, so here’s my quick attempt to clear some of it up.
Artificial Intelligence is the broadest possible category. It includes everything from the chess opponent on the Atari to hypothetical superintelligent systems piloting spaceships in sci-fi. Both are forms of artificial intelligence - but drastically different.
That chess engine is an example of narrow AI: it may even be superhuman at chess, but it can’t do anything else. In contrast, the sci-fi systems like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, or GERTY are imagined as generally intelligent - that is, capable of performing a wide range of cognitive tasks across domains. This is called Artificial General Intelligence (AGI).
One common misconception I keep running into is the claim that Large Language Models (LLMs) like ChatGPT are “not AI” or “not intelligent.” That’s simply false. The issue here is mostly about mismatched expectations. LLMs are not generally intelligent - but they are a form of narrow AI. They’re trained to do one thing very well: generate natural-sounding text based on patterns in language. And they do that with remarkable fluency.
What they’re not designed to do is give factual answers. That it often seems like they do is a side effect - a reflection of how much factual information was present in their training data. But fundamentally, they’re not knowledge databases - they’re statistical pattern machines trained to continue a given prompt with plausible text.
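To make this concrete, here's a minimal sketch of what "continuing a prompt with plausible text" actually looks like, assuming the Hugging Face transformers library and the small GPT-2 model as a stand-in (any causal language model works the same way): the model just assigns a probability to every possible next token.

```python
# Minimal sketch: an LLM is a next-token probability machine.
# Assumes: pip install transformers torch (GPT-2 as a small example model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits            # a score for every token in the vocabulary

# Probability distribution over the *next* token only
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

# The model doesn't "know" the answer - it just ranks plausible continuations.
for prob, token_id in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tokenizer.decode([token_id])!r}  p={prob:.3f}")
```

Generation is just this step in a loop: sample a token, append it to the prompt, repeat. "Knowing facts" never enters into it.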
Usually the reason we want people to stop calling LLMs AI is that a giant marketing machine has been constructed to trick laymen (successfully) into believing that LLMs are adjacent to, and one tiny breakthrough away from, becoming AGI.
From another angle, your statement that AI is not a specific term is correct. Why, then, should we keep using it in common parlance when it just serves to confuse laymen? Let’s just use the more specific terms.
So… not intelligent. In the sense that when someone without enough knowledge of computers and/or LLMs hears "LLMs are intelligent" and sees "an LLM tells me X", they will be likely to believe that X is true - and not without reason. That is exactly my main reason against all use of intelligence-related terms. When they're used by knowledgeable people who do know the difference - yeah, I am all for that. But first we need to cut the crap of advertisement and hype.
So… not intelligent.
But they are intelligent - just not in the way people tend to think.
There’s nothing inherently wrong with avoiding certain terminology, but I’d caution against deliberately using incorrect terms, because that only opens the door to more confusion. It might help when explaining something one-on-one in private, but in an online discussion with a broad audience, you should be precise with your choice of words. Otherwise, you end up with what looks like disagreement, when in reality it’s just people talking past each other - using the same terms but with completely different interpretations.
But they are intelligent - just not in the way people tend to think.
Doesn't that just degenerate into a debate over semantics, though? I.e., what is "intelligence"?
Not having a go, this is a good thread, and useful I think 👍
Yes, and that has always been the debate
But the short answer is that we don't really have a good grasp of what intelligence is, so it is all semantics in the end.
Great point, thank you :)
They ain’t intelligent
What they’re not designed to do is give factual answers
or mental health therapy
To add to the confusion, you also have people out there thinking it’s “Al” or “A1”. It’s a real mess.
I can’t wait to see what A2 can do!
We’ve been waiting for that since 1824!
Really? Like the steak sauce? I guess I should have seen that coming - since the '00s, motorcycle communities have kept asking about their F1 light. Fuel 1njection
Nobody in a position of any importance, just the US Secretary of Education Linda McMahon.
I still think intelligence is a marketing term, or simply a misnomer. It's basically an advanced calculator. Intelligence questions things, creates rules from nothing, transforms raw data from reality into ideas, has its own volition… The same goes for a chess engine, of course; it's just more obvious there, because it's spitting out chess moves rather than text. Intelligence and consciousness don't seem to be computational processes.
You’re describing intelligence more like a soul than a system - something that must question, create, and will things into existence. But that’s a human ideal, not a scientific definition. In practice, intelligence is the ability to solve problems, generalize across contexts, and adapt to novel inputs. LLMs and chess engines both do that - they just do it without a sense of self.
A calculator doesn’t qualify because it runs “fixed code” with no learning or generalization. There’s no flexibility to it. It can’t adapt.
Not just humans but many other animals too - the only group of entities we have ever used the term 'intelligence' for. It could be an entirely physical process, sure (that doesn't imply we can replicate it, but it at least holds a hopeful possibility). I'm not gonna lie and say I understand the ins and outs of these bots - I'm definitely more ignorant on the subject than not - but I don't see how the word 'intelligence' applies in earnest here. Handheld calculators are programmed to "solve problems" based on given rules too… dynamic code and other advances don't change the fact that they're the same logic-gate machine at their core. Having said that, I'm sure they have their uses (idk if they're worth harming the planet for, given the amount of energy they consume!), I'm just not the biggest fan of the semantics.
There's also a philosophical definition, which is hotly contested - so depending on your school of thought, your view of whether an LLM is AI can vary. Many people take issue over questions like: does it have a mind, does it think, does it have consciousness?
Very good explanation. And important distinctions.
What would you call systems that are used for discovery of new drugs or treatments? For example, companies using “AI” for Parkinson’s research.
Both that and LLMs fall under the umbrella of machine learning, but they branch in different directions. LLMs are optimized for generating language, while the systems used in drug discovery focus on pattern recognition, prediction, and simulations. Same foundation - different tools for different jobs.
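To illustrate the "same foundation, different job" point, here's a toy sketch of a drug-discovery-style model as plain supervised machine learning. This assumes scikit-learn and uses entirely made-up data - real pipelines use molecular fingerprints, assay results, graph networks, and so on:

```python
# Toy sketch: "AI for drug discovery" as ordinary supervised ML.
# Entirely synthetic data - real systems use molecular fingerprints,
# protein structures, and far larger models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(2_000, 128))             # stand-in for per-compound feature vectors
y = (X[:, :8].sum(axis=1) > 0).astype(int)    # stand-in for "active against target?"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# No text generation anywhere: the output is a prediction, not a sentence.
print("held-out accuracy:", model.score(X_test, y_test))
```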
Is there a specific name? Or just “non-LLM ML systems”?
They’re generally just referred to as “deep learning” or “machine learning”. The models themselves usually have names of their own, such as AlphaFold, PathAI and Enlitic.
Does that include systems used for “correlation science”? Things like “people that are left-handed and eat sardines are more likely to develop eyebrow cancer”. Also genetic correlations for odd things like musical talent?
Edit: in other words, searches that look for correlations in hundreds of thousands of parameters.
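Edit 2: something like this, as a toy illustration (made-up data, scaled way down, just to show the kind of brute-force search I mean - and why it needs correction for multiple comparisons):

```python
# Toy sketch of brute-force correlation screening on pure noise.
# With enough trait pairs, "significant" correlations appear by chance alone -
# which is how you "discover" sardine/eyebrow-cancer effects.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_traits = 1_000, 500               # real studies: hundreds of thousands of traits
data = rng.normal(size=(n_people, n_traits))  # random data, no real relationships at all

corr = np.corrcoef(data, rowvar=False)        # 500 x 500 correlation matrix

threshold = 0.05
hits = [
    (i, j, corr[i, j])
    for i in range(n_traits)
    for j in range(i + 1, n_traits)
    if abs(corr[i, j]) > threshold
]
print(f"{len(hits)} 'interesting' trait pairs found in pure noise")
```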
The way you describe LLMs sounds exactly like a large portion of the humans I see.
None of this is AI if it doesn’t have the ability to become self-aware.
Consciousness - or "self-awareness" - has never been a requirement for something to qualify as artificial intelligence. It's an important question about AI, sure, but it's a separate discussion entirely. You don't need self-awareness to solve problems, learn patterns, or outperform humans at specific tasks - and that's what intelligence, in this context, actually means.
It’s not really solving problems or learning patterns now, is it? I don’t see it getting past any captchas or answering health questions accurately, so we’re definitely not there.
If you’re talking about LLMs, then you’re judging the tool by the wrong metric. They’re not designed to solve problems or pass captchas - they’re designed to generate coherent, natural-sounding text. That’s the task they’re trained for, and that’s where their narrow intelligence lies.
The fact that people expect factual accuracy or problem-solving ability is a mismatch between expectations and design - not a failure of the system itself. You’re blaming the hammer for not turning screws.
Fair point 😅
That's not quite right - discussions of consciousness, mind, and reasoning are all relevant, and have been part of the philosophy of artificial intelligence for hundreds of years. You're entitled to call it AI within your definitions, but those definitions are not exactly agreed upon - it depends, for example, on whether you subscribe to Alan Turing or John Searle.
deleted by creator
The issue here is that machine learning also falls under the umbrella of AI.
This visual is a bit misleading. LLMs are not a subset of genAI, and they aren't really comparable, because LLMs refer to a vague model type (usually transformers with hundreds of millions of parameters) while genAI is a buzzword for the task of language generation. LLMs can be fine-tuned for a variety of other tasks, like sequence and token classification, and there are other model architectures that can do language generation.
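For example, here's roughly what reusing a pretrained transformer for sequence classification looks like with the Hugging Face transformers library (the model name and label set are just placeholders) - the same kind of backbone people call an "LLM", with no generation involved:

```python
# Sketch: a pretrained transformer repurposed for sequence classification.
# The head outputs label scores, not text. Model name / labels are placeholders.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2   # e.g. spam / not-spam
)

inputs = tokenizer("Claim your free prize now!!!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits     # shape (1, 2): one score per label

# The classification head here is freshly initialized - you'd fine-tune it
# on labeled examples before trusting these probabilities.
print(torch.softmax(logits, dim=-1))
```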
Unrelated, but it’s disappointing how marketing and hype lead to so much confusion and information muddying. Even Wikipedia declaratively states that the most capable LLMs are generative, which academically is simply not the case.
Source: computational linguist who works on LLMs
deleted by creator
When someone online claims that LLMs aren’t AI, my immediate response is to ask them to prove they are a real and intelligent life form. It turns out proving you are real is pretty damned hard when it boils down to it. LLMs may be narrow AI, but humans are pretty narrow in our thinking as well.
I started a project back in January. It's not ready for the public yet, but I'm planning for an early September release. Initially I don't think it will be capable of much, but I'm going to be training it on various datasets in hopes that it is able to pick up on the basics fairly quickly. Over the next few years I'm aiming to train it on verbal communication and limited problem solving, as well as working on refining motor skills for interaction with its environment. After that, I'll be handing it off regularly to professionals who have a lot more experience than me when it comes to training. Of course, I'll still have my own input, but I'll be relying a lot on the expertise of others for training data. It's going to be a slow process, but my long-term goal is a worldwide release sometime in 2043, or maybe 2044, with some limited exposure before then. Of course, the training process never ends and new data is always becoming available, so I expect that to continue well beyond 2044.
deleted by creator
I’m describing raising a child…
I am deleting my precaffeinated post in shame.
RIP. Nothing to see here!