Respectfully, none of the aforementioned examples are simple; if they were, humans wouldn't have needed to leverage AI to make such substantial progress on them in under two years.
None of the ones that actually work resemble intelligence. They're basic language skills from a tool with no path to anything that has anything in common with intelligence. There's plenty you can do algorithmically if you're willing to lose a lot of money on every individual use.
And again, several of them are egregious lies about stuff that is actually worse than nothing.
At what point do you think that your opinion on AI trumps the papers and studies of researchers in those fields?
Actual researchers aren't the ones lying about LLMs. It's exclusively corporate people, and people who have left research for corporate paychecks, playing make-believe that LLMs resemble intelligence.
That said, the academic research space is also a giant mess, and you should take even peer-reviewed papers with a grain of salt: many results can't be replicated, and there is a good deal of outright fraud.