We’re not even remotely close. The promise of AGI is part of the AI hype machine and taking it seriously is playing into their hands.
Irrelevant at best, harmful at worst 🤷
“Dude trust me, just give me 40 billion more dollars, lobby for complete deregulation of the industry, and get me 50 more petabytes of data, then we will have a little human in the computer! RealshitGPT will have human level intelligence!”
We’re not even remotely close.
That’s just the other side of the same coin whose flip side claims AGI is right around the corner. The truth is, you couldn’t possibly know either way.
That’s true in a somewhat abstract way, but I just don’t see any evidence of the claim that it is just around the corner. I don’t see what currently existing technology can facilitate it. Faster-than-light travel could also theoretically be just around the corner, but it would surprise me if it was, because we just don’t have the technology.
On the other hand, the people who push the claim that AGI is just around the corner usually have huge vested interests.
In some dimensions, current-day LLMs are already superintelligent. They are extremely good knowledge retrieval engines that can far outperform traditional search engines, once you learn how to use them properly. No, they are not AGIs, because they’re not sentient or self-motivated, but I’m not sure those are desirable or useful dimensions of intellect to work towards anyway.
I think that’s a very generous use of the word “superintelligent”. They aren’t anything like what I associate with that word anyhow.
I also don’t really think they are knowledge retrieval engines. I use them extensively in my daily work, for example to write emails and generate ideas. But when it comes to facts they are flaky at best. It’s more of a free association game than knowledge retrieval IMO.
How do you know we’re not remotely close to AGI? Do you have any expertise on the issue? And expertise is not “I can download Python libraries and use them”; it is “I can explain the mathematics behind what is going on, and understand the technical and theoretical challenges”.
Do you have any expertise on the issue?
I hold a PhD in probabilistic machine learning and advise businesses on how to use AI effectively for a living so yes.
IMHO, there is simply nothing indicating that it’s close. Sure, LLMs can do some incredibly clever-sounding word extrapolation, but the current “reasoning models” still don’t actually reason. They are just LLMs with some extra steps.
There is lots of information out there on the topic so I’m not going to write a long justification here. Gary Marcus has some good points if you want to learn more about what the skeptics say.
Gary Marcus is certainly good. It’s not as if I think, say, LeCun, or any of the many people who think that LLMs aren’t the way, are morons. I don’t think anyone thinks all the problems are currently solved. And I think long timelines are still plausible, but I think dismissing short timelines out of hand is thoughtless.
My main gripe is how certain people are about things they know virtually nothing about. And how slapdash their reasoning is. It seems to me most people’s reasoning goes something like “there is no little man in the box, it’s just math, and math can’t think.” Of course, they say it with a lot fancier words, like “it’s just gradient descent,” as if human brains couldn’t have gradient descent baked in anywhere.
But, out of interest what is your take on the Stochastic Parrot? I find the arguments deeply implausible.
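For reference, the “it’s just gradient descent” being waved away refers to nothing more mysterious than a loop like this. A minimal toy sketch (minimizing a made-up one-variable function, not anything from an actual LLM):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step opposite the gradient to minimize a function."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Toy example: minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
# The loop walks x from 0.0 toward the minimizer at x = 3.
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

Training a neural network is this same idea scaled up to billions of parameters, which is part of why “it’s just gradient descent” sounds dismissive but explains very little either way.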
I’m not saying that we can’t ever build a machine that can think. You can do some remarkable things with math. I personally don’t think our brains have baked in gradient descent, and I don’t think neural networks are a lot like brains at all.
The stochastic parrot is a useful vehicle for criticism, and I think there is some truth to it. But I also think LLMs display some super impressive emergent features. Still, I think they are really far from AGI.
So, how would you define AGI, and what sorts of tasks require reasoning? I would have thought earning the gold medal on the IMO would have been a reasoning task, but I’m happy to learn why I’m wrong.
I definitely think that’s remarkable. But I don’t think scoring high on an external measure like a test is enough to prove the ability to reason. For reasoning, the process matters, IMO.
Reasoning models work by Chain-of-Thought, which has been shown to give false reassurances about their actual process (https://arxiv.org/abs/2305.04388).
Maybe passing some math test is enough evidence for you but I think it matters what’s inside the box. For me it’s only proved that tests are a poor measure of the ability to reason.
I’m sorry, but this reads to me like “I am certain I am right, so evidence that implies I’m wrong must be wrong.” And while sometimes that really is the right approach to take, more often than not you really should update the confidence in your hypothesis rather than discarding contradictory data.
But, there must be SOMETHING which is a good measure of the ability to reason, yes? If reasoning is an actual thing that actually exists, then it must be detectable, and there must be a way to detect it. What benchmark do you propose?
You don’t have to seriously answer, but I hope you see where I’m coming from. I assume you’ve read Searle, and I cannot express to you the contempt in which I hold him. I think, if we are to be scientists and not philosophers (and good philosophers should be scientists too) we have to look to the external world to test our theories.
For me, what goes on inside does matter, but what goes on inside everyone everywhere is just math, and I haven’t formed an opinion about what math is really most efficient at instantiating reasoning, or thinking, or whatever you want to talk about.
To be honest, the other day I was convinced it was actually derivatives and integrals and, because of this, that analog computers would make much better AIs than digital computers. (But Hava Siegelmann’s book is expensive, and, while I had briefly lifted my book-buying moratorium, I think I have to impose it again.)
Hell, maybe Penrose is right and we need quantum effects (I really really really doubt it, but, to the extent that it is possible for me, I try to keep an open mind).
🤷‍♂️
I’m not sure I can give a satisfying answer. There are a lot of moving parts here, and a big issue is definitions, which you also touch upon with your reference to Searle.
I agree with the sentiment that there must be some objective measure of reasoning ability. To me, reasoning is more than following logical rules. It’s also about interpreting the intent of the task. The reasoning models are very sensitive to initial conditions and tend to drift when the question is not super precise or if they don’t have sufficient context.
The AI models are in a sense very fragile to the input. Organic intelligence on the other hand is resilient and also heuristic. I don’t have any specific idea for the test, but it should test the ability to solve a very ill-posed problem.
I think we should also set some energy limits for those tests. It used to be assumed that these tests were taken by humans, who can do them after eating some crackers and drinking a bit of water.
Now we are comparing that to massive data centers that need nuclear reactors to have enough power to work through these problems…
Part of this is a debate on what the definition of intelligence and/or consciousness is, which I am not qualified to discuss. (I say “discuss” instead of “answer” because there is not an agreed upon answer to either of those.)
That said, one of the main purposes of AGI would be to learn novel subject matter, and to come up with solutions to novel problems. No machine learning tool we have created so far is capable of that, on a fundamental level. They require humans to frame their training data by defining what the success criteria are, or they spit out the statistically likely human-like response based on all of the human-generated content they’ve consumed.
In short, they cannot understand a concept that humans haven’t yet understood, and can only echo solutions that humans have already tried.
I don’t see why AGI must be conscious, and the fact that you even bring it up makes me think you haven’t thought too hard about any of this.
When you say “novel answers,” what is it you mean? The questions on the IMO have never been asked of any human before the Math Olympiad, and almost all humans cannot answer those questions.
Why does answering those questions not count as novel? What is a question whose answer you would count as novel, and which you yourself could answer? Presuming that you count yourself as intelligent.
The path to AGI seems inevitable - not because it’s around the corner, but because of the nature of technological progress itself. Unless one of two things stops us, we’ll get there eventually:
- Either there’s something fundamentally unique about how the biological brain processes information - something that cannot, even in principle, be replicated in silicon,
- Or we wipe ourselves out before we get the chance.
Barring those, the outcome is just a matter of time. This argument makes no claim about timelines - only trajectory. Even if we stopped AI research for a thousand years, it’s hard to imagine a future where we wouldn’t eventually resume it. That’s what humans do: improve our technology.
The article points to cloning as a counterexample but that’s not a technological dead end, that’s a moral boundary. If one thinks we’ll hold that line forever, I’d call that naïve. When it comes to AGI, there’s no moral firewall strong enough to hold back the drive toward it. Not permanently.
something that cannot, even in principle, be replicated in silicon
As if silicon were the only technology we have to build computers.
Did you genuinely not understand the point I was making, or are you just being pedantic? “Silicon” obviously refers to current computing substrates, not a literal constraint on all future hardware. If you’d prefer I rewrite it as “in non-biological substrates,” I’m happy to oblige - but I have a feeling you already knew that.
And why is “non-biological” a limitation?
I haven’t claimed that it is. The point is, the only two plausible scenarios I can think of where we don’t eventually reach AGI are: either we destroy ourselves before we get there, or there’s something fundamentally mysterious about the biological computer that is the human brain - something that allows it to process information in a way we simply can’t replicate any other way.
I don’t think that’s the case, since both the brain and computers are made of matter, and matter obeys the laws of physics. But it’s at least conceivable that there could be more to it.
I personally think that the additional component modern approaches miss (suppose it’s energy) is the sheer amount of entropy a human brain gets: plenty of many-times-duplicated sensory signals with pseudo-random fluctuations. I don’t know how one can use lots of entropy to replace lots of computation (OK, I know what the Monte Carlo method is, just not how it applies to AI), but superficially this seems to be the route that will be taken at some point.
On your point - I agree.
I’d say we might reach AGI soon enough, but it will be impractical to use compared to a human.
Matching that efficiency is something very far away, because a human brain has undergone, so to say, an optimization/compression fueled by the energy of evolution since the beginning of life on Earth.
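Since the Monte Carlo method comes up here, the classic toy illustration of trading computation for randomness is estimating π from random samples. A hypothetical sketch, nothing AI-specific:

```python
import random

def estimate_pi(n_samples: int) -> float:
    """Estimate pi by throwing random points at the unit square and
    counting the fraction that land inside the quarter circle."""
    inside = sum(
        1
        for _ in range(n_samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples

random.seed(0)  # fixed seed only to make the sketch reproducible
pi_estimate = estimate_pi(100_000)  # converges toward pi as samples grow
```

The entropy (random samples) stands in for an exact integration; whether anything analogous carries over to replacing computation in AI systems is, as the comment says, an open question.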
Human level? That’s not setting the bar very high. Surely the aim would be to surpass human, or why bother?
Why would we want to? 99% of the issues people have with “AI” are just problems with society more broadly that AI didn’t really cause, only exacerbated. I think it’s absurd to just reject this entire field because of a bunch of shitty fads going on right now with LLMs and image generators.
It’s just a cash grab to take people’s jobs and give them to a chat bot that’s fed Wikipedia’s data on crack.
Don’t confuse AGI with LLMs. Both being AI systems is the only thing they have in common. They couldn’t be further apart when it comes to cognitive capabilities.