It’s fun to say that artificial intelligence is fake and sucks — but evidence is mounting that it’s real and dangerous
In aggregate, though, and on average, they’re usually right. It’s not impossible that the tech industry’s planned quarter-trillion dollars of spending on infrastructure to support AI next year will never pay off. But spending at that scale is a signal that they have already seen something real.
The market is incredibly irrational and massive bubbles happen all the time.
The number of users, when all the search engines are forcibly injecting it into every search (and hemorrhaging money to do it)? Just as dumb.
Any thoughts on the paragraph following your excerpt:
The most persuasive way you can demonstrate the reality of AI, though, is to describe how it is already being used today. Not in speculative sci-fi scenarios, but in everyday offices and laboratories and schoolrooms. And not in the ways that you already know — cheating on homework, drawing bad art, polluting the web — but in ones that feel surprising and new.
With that in mind, here are some things that AI has done in 2024.
- Cut customer losses from scams in half through proactive detection, according to the Bank of Australia.
- Preserved some of the 200 endangered Indigenous languages spoken in North America.
- Accelerated drug discovery, offering the possibility of breakthrough protections against antibiotic resistance.
- Detected the presence of tuberculosis by listening to a patient’s voice.
- Reproduced an ALS patient’s lost voice.
- Enabled persecuted Venezuelan journalists to resume delivering the news via digital avatars.
- Pieced together fragments of the epic of Gilgamesh, one of the world’s oldest texts.
- Caused hundreds of thousands of people to develop intimate relationships with chatbots.
- Created engaging and surprisingly natural-sounding podcasts out of PDFs.
- Created poetry that participants in a study say they preferred to human-written poetry in a blind test. (This may be because people prefer bad art to good art, but still.)
did you actually just bring that up as a positive?
The author of the article did. It’s a bit of a stretch, as are the last 2-3 items on the list 🤷🏾‍♂️. The first few are still pretty big.
Mostly hyping up very simple things?
LLMs don’t add anything over actively scanning for a handful of basic rules plus link scanning. Flagging anything that references a bank but isn’t on a whitelist of legitimate bank domains for a given country would likely be more effective.
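The whitelist idea above can be sketched in a few lines. This is a minimal illustration only: the whitelist entries, keyword list, and message format are all made up for the example, and a real scam filter would need far more than this.

```python
import re
from urllib.parse import urlparse

# Hypothetical whitelist of legitimate bank domains for one country.
BANK_DOMAIN_WHITELIST = {"commbank.com.au", "nab.com.au", "anz.com.au"}

# Crude keywords suggesting a message is talking about banking.
BANK_KEYWORDS = ("bank", "account", "verify", "suspended")

def extract_hostnames(text):
    """Pull the hostname out of every URL found in the text."""
    urls = re.findall(r"https?://\S+", text)
    return [urlparse(u).hostname or "" for u in urls]

def looks_like_bank_scam(message):
    """Flag messages that mention banking but link to non-whitelisted domains."""
    if not any(k in message.lower() for k in BANK_KEYWORDS):
        return False
    for host in extract_hostnames(message):
        # Accept exact whitelist matches and their subdomains.
        if not any(host == d or host.endswith("." + d)
                   for d in BANK_DOMAIN_WHITELIST):
            return True
    return False
```

So a message like "Your bank account is suspended, verify at http://secure-bank-login.example" gets flagged, while a link to a whitelisted domain passes. The obvious weakness is the same one the comment implies: it is cheap and deterministic, but attackers can dodge keyword lists, which is the gap an LLM-based detector is supposed to fill.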
The language stuff is the only parts they’re actually good at.
Chatbots are genuine dogshit, PDF to podcast is genuine dogshit, poetry is genuine dogshit.
Respectfully, none of the aforementioned examples are simple, or else humans wouldn’t have needed to leverage AI to make such substantial progress in less than 2 years.
None of the ones that actually work resemble intelligence. They’re basic language skills from a tool that has no path to anything resembling intelligence. There’s plenty you can do algorithmically if you’re willing to lose a lot of money on every individual use.
And again, several of them are egregious lies about shit that is actually worse than nothing.
At what point do you think that your opinion on AI trumps the papers and studies of researchers in those fields?
Actual researchers aren’t the ones lying about LLMs. It’s exclusively corporate people and people who have left research for corporate paychecks playing make believe that they resemble intelligence.
That said, the academic research space is also a giant mess and you should also take even peer reviewed papers with a grain of salt, because many can’t be replicated and there is a good deal of actual fraud.
It’s real and it’s dangerous, but it’s also fake and it sucks.
I honestly doubt I would ever pay for this shit. I’ll use it, fine, but I’ve noticed actual serious, problematic “hallucinations” that shocked the hell out of me, to the point I think it has a hopeless signal/noise problem and could never be reliably accurate and trusted.
I’ve had two useful applications of “AI”.
One is using it to explain programming frameworks, libraries, and language features. In these cases it’s sometimes wrong or outdated, but it’s easy to test and check whether it’s right. Extremely valuable in this case! It basically just sums up what everybody already said, so it’s easier and more on-point than doing a google search.
The other is writing prompts and getting it to make insane videos. In this case all I want is the hallucinations! It makes some stupid insane stuff. But the novelty wears off quick and I just don’t care any more.
I will say the coding shit is good stuff, ironically. But I would still have to run the code and make sure it’s sound. In terms of anything citation-wise tho, it’s completely sus af.
It has straight-up made up damn citations, like something I could have come up with to escape interrogation during a panned 4th-grade presentation to a skeptical audience.
But I would still have to run the code and make sure it’s sound.
Oh I don’t get it to write code for me. I just get it to explain stuff.
I’ve been using AI to troubleshoot/learn after switching from Windows -> Linux 1.5 years ago. It has given me very poor advice occasionally, but it has taught me a lot more valuable info. This is not dissimilar to my experience following tutorials on the internet…
I honestly doubt I would ever pay for this shit.
I understand your perspective. Personally, I think there’s a chicken/egg situation where the free AI versions are a subpar representation that makes skeptics view AI as a whole as over-hyped. OTOH, the people who use the better models experience the benefits first hand, but are seen as AI zealots having the wool pulled over their eyes.
It’s the latest product that everyone will refuse to pay real money for once they figure out how useless and stupid it really is. Same bullshit bubble, new cycle.