

Freedom is the right to tell people what they do not want to hear.
It was supposed to be a joke. They live around 2 to 4 years.
I’ve had the same gerbil for almost 30 years. I doubt I’d notice if someone swapped it for an identically colored one in the middle of the night.
Judging by the comments here, I’m getting the impression that people would rather provide a selfie or ID.
And as a self-employed person, I only know how much I earned at the end of the year, which could be wildly different from the year before.
It’s actually the opposite of a very specific definition - it’s an extremely broad one. “AI” is the parent category that contains all the different subcategories, from the chess opponent on an old Atari console all the way up to a hypothetical Artificial Superintelligence, even though those systems couldn’t be more different from one another.
It’s a system designed to generate natural-sounding language, not to provide factual information. Complaining that it sometimes gets facts wrong is like saying a calculator is “stupid” because it can’t write text. How could it? That was never what it was built for. You’re expecting general intelligence from a narrowly intelligent system. That’s not a failure on the LLM’s part - it’s a failure of your expectations.
Claims like this just create more confusion and lead to people saying things like “LLMs aren’t AI.”
LLMs are intelligent - just not in the way people think.
Their intelligence lies in their ability to generate natural-sounding language, and at that they’re extremely good. Expecting them to consistently output factual information isn’t a failure of the LLM - it’s a failure of the user’s expectations. LLMs are so good at generating text, and so often happen to be correct, that people start expecting general intelligence from them. But that’s never what they were designed to do.
There are plenty of similarities in the output of both the human brain and LLMs, but overall they’re very different. Unlike LLMs, the human brain is generally intelligent - it can adapt to a huge variety of cognitive tasks. LLMs, on the other hand, can only do one thing: generate language. It’s tempting to anthropomorphize systems like ChatGPT because of how competent they seem, but there’s no actual thinking going on. It’s just generating language based on patterns and probabilities.
Large language models aren’t designed to be knowledge machines - they’re designed to generate natural-sounding language, nothing more. The fact that they ever get things right is just a byproduct of their training data containing a lot of correct information. These systems aren’t generally intelligent, and people need to stop treating them as if they are. Complaining that an LLM gives out wrong information isn’t a failure of the model itself - it’s a mismatch of expectations.
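To make “generating language based on patterns and probabilities” concrete, here’s a minimal toy sketch of the idea: at each step, pick the next token from a probability distribution learned from training data. The vocabulary and the probabilities below are invented purely for illustration; a real LLM does this with a neural network over tens of thousands of tokens.

```python
import random

# Toy "language model": for each recent context, a probability distribution
# over possible next tokens. These numbers are made up for illustration only.
next_token_probs = {
    ("the", "capital", "of"): {"France": 0.55, "Finland": 0.25, "Mars": 0.20},
    ("capital", "of", "France"): {"is": 0.9, "was": 0.1},
    ("of", "France", "is"): {"Paris": 0.85, "Lyon": 0.10, "cheese": 0.05},
}

def generate(context, steps=3):
    tokens = list(context)
    for _ in range(steps):
        dist = next_token_probs.get(tuple(tokens[-3:]))
        if dist is None:
            break
        # Sample the next token according to its probability - no fact lookup,
        # no reasoning, just weighted dice rolls over learned patterns.
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

print(generate(["the", "capital", "of"]))
# Most often prints "the capital of France is Paris", but it can just as well
# print "the capital of Mars" - plausibility, not truth, drives the output.
```

Scale that up to a neural network estimating the distribution over a huge vocabulary and you have the core generation loop: sample a token, append it, repeat. “Sometimes wrong” isn’t a bug in that loop - it’s the nature of it.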
Apparently I’m old enough to be a Lemmy user’s dad.
Find an ETF index fund that’s highly diversified across both sectors and regions, with total expenses under 0.5%, and set up an automatic monthly investment into it. It’s the boring way to invest - but unless you’ve got a crystal ball and can predict the future, I wouldn’t start gambling on individual stocks. This is basically the same advice Warren Buffett would give you.
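If you want to see why the expense ratio matters, here’s a rough back-of-the-envelope sketch. The 7% gross return, 30-year horizon, and 200/month contribution (whatever your currency) are made-up example numbers, not a forecast:

```python
def final_value(monthly, annual_return, expense_ratio, years):
    """Future value of a fixed monthly contribution, compounding monthly,
    with the fund's expense ratio subtracted from the gross return."""
    r = (annual_return - expense_ratio) / 12  # net monthly return
    total = 0.0
    for _ in range(years * 12):
        total = total * (1 + r) + monthly
    return total

# Example numbers only: 200/month for 30 years at 7% gross annual return.
cheap = final_value(200, 0.07, 0.002, 30)   # 0.2% expense ratio index ETF
pricey = final_value(200, 0.07, 0.015, 30)  # 1.5% actively managed fund
print(f"0.2% fund:   {cheap:,.0f}")
print(f"1.5% fund:   {pricey:,.0f}")
print(f"difference:  {cheap - pricey:,.0f}")
```

With those made-up numbers, the costlier fund ends up tens of thousands behind over 30 years, which is the whole argument for keeping expenses low and automating the contributions.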
The few things I’m not buying on principle are such that I wouldn’t even know whether someone else bought them or not. But no, I don’t care. There’s nothing I refuse to buy because I think the company that produces it is literally Hitler.
I haven’t claimed that it is. The point is, the only two plausible scenarios I can think of where we don’t eventually reach AGI are: either we destroy ourselves before we get there, or there’s something fundamentally mysterious about the biological computer that is the human brain - something that allows it to process information in a way we simply can’t replicate by any other means.
I don’t think that’s the case, since both the brain and computers are made of matter, and matter obeys the laws of physics. But it’s at least conceivable that there could be more to it.
Did you genuinely not understand the point I was making, or are you just being pedantic? “Silicon” obviously refers to current computing substrates, not a literal constraint on all future hardware. If you’d prefer I rewrite it as “in non-biological substrates,” I’m happy to oblige - but I have a feeling you already knew that.
Älä välitä, ei se villekään välittänyt, vaikka sen väliaikaiset välihousut jäi väliaikaisen välitystoimiston väliaikaisen välioven väliin.
Rough translation: Don’t worry about it - Ville didn’t worry either when his temporary long johns got caught in the temporary side door of the temporary temp agency.
We’re not even remotely close.
That’s just the other side of the same coin whose flip side claims AGI is right around the corner. The truth is, you couldn’t possibly know either way.
Don’t confuse AGI with LLMs. Both being AI systems is the only thing they have in common. They couldn’t be further apart when it comes to cognitive capabilities.
The path to AGI seems inevitable - not because it’s around the corner, but because of the nature of technological progress itself. Unless one of two things stops us, we’ll get there eventually:
Either there’s something fundamentally unique about how the biological brain processes information - something that cannot, even in principle, be replicated in silicon,
Or we wipe ourselves out before we get the chance.
Barring those, the outcome is just a matter of time. This argument makes no claim about timelines - only trajectory. Even if we stopped AI research for a thousand years, it’s hard to imagine a future where we wouldn’t eventually resume it. That’s what humans do: improve our technology.
The article points to cloning as a counterexample, but that’s not a technological dead end - it’s a moral boundary. If one thinks we’ll hold that line forever, I’d call that naïve. When it comes to AGI, there’s no moral firewall strong enough to hold back the drive toward it. Not permanently.
They’re generally just referred to as “deep learning” or “machine learning”. The models themselves usually have names of their own, such as AlphaFold, PathAI and Enlitic.
If you’re genuinely interested in what “artificial superintelligence” (ASI) means, you can just look it up. Zuckerberg didn’t invent the term - it’s been around for decades, popularized lately by Nick Bostrom’s book Superintelligence.
The usual framing goes like this: Artificial General Intelligence (AGI) is an AI system with human-level intelligence. Push it beyond human level and you’re talking about Artificial Superintelligence - an AI with cognitive abilities that surpass our own. Nothing mysterious about it.