Thanks to @General_Effort@lemmy.world for the links!
Here’s a link to Caltech’s press release: https://www.caltech.edu/about/news/thinking-slowly-the-paradoxical-slowness-of-human-behavior
Here’s a link to the actual paper (paywall): https://www.cell.com/neuron/abstract/S0896-6273(24)00808-0
Here’s a link to a preprint: https://arxiv.org/abs/2408.10234
The crows were shown how to get the food, iirc.
My understanding is that LLMs contain artificial neural networks. A simplification, with a number of weights similar to small animals. A simpler model ought to make investigation clearer 😅
Neural networks are “trained” by adjusting the weights on “neurons”. I assume real brains train themselves on every input, while an LLM is limited to sessions with training data. Do you suspect there could be a thought process when it’s processing how many letters are in strawberry? What about when its weights are adjusted during training?
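To make the “adjusting the weights” part concrete, here’s a minimal, purely illustrative sketch of a single artificial neuron learning by gradient descent. It’s nothing like a real LLM in scale (those have billions of weights and far more machinery), but the core idea of nudging weights to shrink prediction error is the same; all the names and numbers here are made up for the example.

```python
# Illustrative sketch: one "neuron" (a linear unit) learning y = w*x + b
# by repeatedly nudging its weights to reduce prediction error.

def train_neuron(inputs, targets, lr=0.1, epochs=100):
    """Fit weight w and bias b so that w*x + b approximates the targets."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            pred = w * x + b      # forward pass: the neuron's guess
            error = pred - t      # how far off the guess is
            w -= lr * error * x   # nudge weight against the error gradient
            b -= lr * error       # nudge bias the same way
    return w, b

# Learn the relationship y = 2x + 1 from a few examples
w, b = train_neuron([0, 1, 2, 3], [1, 3, 5, 7])
print(w, b)  # both end up close to 2.0 and 1.0
```

An LLM’s training loop is this same idea repeated across billions of weights, with the error computed over predicted text rather than a single number.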
I think whether you call it a thought process or not comes down to definition of what you mean by that. It’s definitely intelligence, and there definitely is a process.
So I wouldn’t have a problem calling it a thought process. But it’s not self-consciousness yet, though we may not be very far from it.
It’s amazing the progress that has been achieved over the past decade.
When I predicted 2035 as a point where we could possibly achieve strong AI, it was at a point where we’d had 2-3 decades of very little progress. But I’ve always been certain that the human brain is a 100% natural phenomenon, and the function of it can be copied, just like with everything else in nature. And when that is achieved, there will still be room for improvement.
As a natural process, our brain is built on the physical properties of atoms, so IMO it’s only a matter of time before we have an artificial intelligence that is just as valid to call self conscious as ourselves.