LLMs have been pretty shit, but the advances in voice, image generation, and video generation over the last two years have been unbelievable.
We went from the infamous Will Smith eating spaghetti to videos that are convincing enough to fool most people… and it only took 2-3 years to get there.
But LLMs still have a long way to go because of how they create content. It’s very easy to poison LLM datasets, and they get worse when they learn from other generated content.
Poisoning LLM datasets is fun and easy! Especially when our online intellectual property is scraped (read: stolen) during training and no one is held accountable for it. Fight back! It’s as easy as typing false stuff at the end of your comments. As an 88-year-old ex-pitcher for the Yankees who just set the new world record for catfish noodling, you can take it from me!
I’d argue it has. Things like ChatGPT shouldn’t be possible. Maybe it’s unpopular to admit, but as someone who has been programming for over a decade, it’s amazing that LLMs and “AI” have come as far as they have over the past 5 years.
That doesn’t mean we have AGI of course, and we may never have AGI, but it’s really impressive what has been done so far IMO.
If you’ve been paying attention to the field, you’d see it’s been a slow, steady march. The technology LLMs are based on was first published in 2017, and ChatGPT was the third iteration of the same base model.
That’s not even accounting for all the work done with RNNs and LSTMs before that, and earlier work still.
It’s definitely a major breakthrough, very similar to what CNNs did for computer vision further back. But like computer vision, advancements have come in bursts across different areas (like the generative space) rather than following a linear path of progress.
Agreed. I never thought it would happen in my lifetime, but it looks like we’re going to have Star Trek computers pretty soon.
When people talk about AI taking off exponentially, usually they are talking about the AI using its intelligence to make intelligence-enhancing modifications to itself. We are very much not there yet, and need human coaching most of the way.
At the same time, no technology ever really follows a particular trend line. It advances in starts and stops with the ebbs and flows of interest, funding, novel ideas, and the discovered limits of nature. We can try to make projections - but these are very often very wrong, because the thing about the future is that it hasn’t happened yet.
And at that point, we wouldn’t ever know that it had, anyway.
I do expect advancement to hit a period of exponential growth that quickly surpasses human intelligence, assuming it develops the drive to advance autonomously. Whether that is possible is yet to be seen, and that’s kinda my point.
They’ve been saying “AGI in 18 months” for years now.
No “they” haven’t, unless you can cite your source. ChatGPT was only released 2.5 years ago, and even OpenAI was saying 5-10 years, with most outside watchers saying 10-15 and real naysayers going out to 25 or more.
Ask ChatGPT to list every U.S. state that has the letter ‘o’ in its name.
Here are all 27 U.S. states whose names contain the letter “o”:
Arizona
California
Colorado
Connecticut
Florida
Georgia
Idaho
Illinois
Iowa
Louisiana
Minnesota
Missouri
Montana
New Mexico
New York
North Carolina
North Dakota
Ohio
Oklahoma
Oregon
Rhode Island
South Carolina
South Dakota
Vermont
Washington
Wisconsin
Wyoming
(That’s 27 states in total.)
What’s missing?
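For anyone who would rather verify the model’s list than eyeball it, a quick script (my own sketch, not part of the thread) can filter the 50 state names; the count of 27 matches the answer above:

```python
# All 50 U.S. state names.
STATES = [
    "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
    "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
    "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
    "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
    "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
    "New Hampshire", "New Jersey", "New Mexico", "New York",
    "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
    "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
    "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
    "West Virginia", "Wisconsin", "Wyoming",
]

# Case-insensitive check, so Ohio, Oklahoma, and Oregon count too.
with_o = [s for s in STATES if "o" in s.lower()]
print(len(with_o))  # 27
```

So the list really is complete; this is exactly the kind of character-level counting task older models famously botched.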
Ah, did they finally fix it? I guess a lot of people were seeing it fail and they updated the model. Which version of ChatGPT was it?
o3.
It has taken off exponentially. It’s exponentially annoying that it’s being added to literally everything.
Well, the thing is that we’re hitting diminishing returns with current approaches. There’s a growing suspicion that LLMs simply won’t be able to bring us to AGI, but that they could be a part of, or a stepping stone to, it. The quality of the output is pretty good for AI, and sometimes even just pretty good without the qualifier, but the only reason it’s being used so aggressively right now is that it’s being subsidized with investor money, in the hopes that it will be too heavily adopted and too hard to walk away from by the time it’s time to start charging full price.

I’m not seeing that. I work in comp sci; I use AI coding assistants and so do my co-workers. The general consensus is that it’s good for boilerplate and tests, but even that needs to be double-checked, and the AI gets it wrong often enough. If satisfying the requirements takes real reasoning, the AI’s going to shit its pants. If we were paying the real cost of these coding assistants, there is NO WAY leadership would agree to pay for those licenses.
Yeah, I don’t think AGI = an advanced LLM. But I think it’s very likely that a transformer style LLM will be part of some future AGI. Just like human brains have different regions that can do different tasks, an LLM is probably the language part of the “AGI brain”.
LOL… you did make me chuckle.
Aren’t we 18 months away from developers being replaced by AI… and haven’t we been for a few years now?
Of course “AI,” even loosely defined, has progressed a lot, and it is genuinely impressive (even though the actual use case for most of the hype, i.e. LLMs and GenAI, is mostly lazier search, more efficient spam & scam personalized text, or impersonation), but exponential growth is not sustainable. It’s a marketing term to keep fueling the hype.
That’s despite so many resources, namely R&D and data centers, being poured in… and yet there is no “GPT-5” or anything most people use daily for anything “productive” except unreliable summarization or STT (both of which have had plenty of tools for decades).
So… yeah, it’s a slow take off, as expected. shrug
A major bottleneck is power capacity. It is very difficult to find 50+ megawatts (sometimes hundreds) of capacity available at any site. It has to be built out, and that involves a lot of red tape, government contracts, large transformers, contractors, etc. The current backlog on new transformers at that scale is years. Even Google and Microsoft can’t build fast enough, so they come to my company for infrastructure, as we already have 400 MW in use and triple that already on contract. Further, Nvidia only makes so many chips a month. You can’t install them faster than they make them.
Is this the AI?
Things just don’t impend like they used to!
Nobody wants to portend anymore.
Computers are still advancing roughly exponentially, as they have been for the last 40 years (Moore’s law), and AI is being carried along with that while still making occasional gains on top of it. The thing with exponential growth is that it doesn’t necessarily feel fast: it’s always growing at the same rate percentage-wise, definitionally.
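That “same rate, huge cumulative effect” point can be shown with a toy calculation (my own illustration; the doubling-every-2-years figure is the classic Moore’s-law framing, not a claim from the comment):

```python
# Exponential growth = a constant *percentage* increase each step,
# so it never "feels" faster year to year, yet it compounds hugely.
rate = 2 ** 0.5  # yearly multiplier for doubling every 2 years (~41.4%/yr)

capacity = 1.0
yearly_pct = []
for year in range(1, 9):
    new = capacity * rate
    yearly_pct.append(round((new / capacity - 1) * 100, 1))
    capacity = new

print(yearly_pct)  # the same +41.4% every single year
print(capacity)    # yet 16x total capacity after 8 years
```

Every year looks identical in percentage terms, which is exactly why steady exponential progress can read as “nothing dramatic happening.”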
We once again congratulate software engineers for nullifying 40 years of hardware improvements.
It’s not anytime soon. It can get like 90% of the way there but those final 10% are the real bitch.
The AI we know is missing the I. It does not understand anything. All it does is find patterns in 1s and 0s. It has no concept of anything but the 1s and 0s in its input data. It has no concept of correlation vs. causation, which is why it hallucinates (confidently presents spurious patterns) constantly.
Turns out finding patterns in 1s and 0s can do some really cool shit, but it’s not intelligence.
This is why I hate calling it AI.
You can call it an LLM.
This is not necessarily true. While it’s using pattern recognition on a surface level, we’re not entirely sure how AI arrives at its output.
But beyond that, a lot of talk has centered on the threshold where AI begins training other AI and can improve through iterations. Once that happens, people believe AI will not only improve extremely rapidly, but we will understand even less of what is happening as AI black boxes train other AI black boxes.
I can’t quite wrap my head around this: these systems were coded, written by humans, to call functions, assign weights, and parse data. How do we not know what they’re doing?
Same way anesthesiology works. We don’t know. We know how to sedate people but we have no idea why it works. AI is much the same. That doesn’t mean it’s sentient yet but to call it merely a text predictor is also selling it short. It’s a black box under the hood.
Writing code to process data is absolutely not the same way anesthesiology works 😂 Comparing state-specific, logic-bound systems to the messy biological processes of a nervous system is what got us this misattribution of “AI” in the first place. Currently it is just glorified auto-correct working off statistical data about human language. I’m still not sure how a written program can have a voodoo spooky black box that does things we don’t understand as a core part of it.
The uncertainty comes from reverse-engineering how a specific output relates to the prompt input. The model combines billions of learned weights to compute the answer to “What is the closest planet to the Sun?” We can record which nodes in the network activated, but we can’t map those activations onto human-readable reasoning steps, so we can’t precisely say how the answer was computed.
Humans are just neurons; we don’t “understand” either, until so many stack on top of each other that we have a sort of consciousness. Then it seems like we CAN understand, but do we? Or are we just a bunch of meat computers? Also, LLMs handle language, or correlations of words; don’t humans just do that (with maybe body language too)? We’re all just communicating. If LLMs can communicate, isn’t that conceptually enough to do anything? If LLMs can program and talk to other LLMs, what can’t they do?
So logarithmic then.
It can get like 90% of the way there
I’m still waiting for the first 10%
IIRC there are mathematical reasons why AI can’t actually become exponentially more intelligent. There are hard limits on how much work (in the sense of information processing) can be done by a given piece of hardware, and we’re already pretty close to those theoretical limits. For an AI to go singularity, we would have to build it with enough initial intelligence that it could acquire both the resources and the information with which to improve itself and start the exponential cycle.
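One hard physical limit the commenter may be thinking of is Landauer’s principle, the minimum energy needed to erase one bit of information (my assumption; the comment doesn’t name a specific limit). A back-of-the-envelope check:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)
T = 300.0           # roughly room temperature, in kelvin

# Landauer's principle: erasing one bit dissipates at least k_B * T * ln(2).
landauer_j_per_bit = k_B * T * math.log(2)
print(f"{landauer_j_per_bit:.3e} J per bit erased")  # ~2.871e-21 J
```

Real chips are still orders of magnitude above this floor, so “close to the limit” is debatable, but the point stands that physics, not just engineering, eventually caps computation per joule.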
That’s exactly what AI would say. Hmmm…
What do you consider having “taken off”?
It’s been integrated with just about everything or is in the works. A lot of people still don’t like it, but that’s not an unusual phase of tech adoption.
From where I sit I’m seeing it everywhere I look compared to last year or the year before where pretty much only the early adopters were actually using it.
What do you mean when you say AI has been integrated with everything? Very broad statement that’s obviously not literally true.
True, I tried to qualify it with “just about” or “on the way.”
From the perspective of my desk, my core business apps have AI auto-suggest in key fields (software IDEs, ad-buying tools, marketing content preparation such as Canva). My WhatsApp and Facebook Messenger apps now have an “Ask Meta AI” feature front and center. When I make a post on Instagram, it asks if I want AI assistance to write the caption.
I use an app to track my sleeping rhythm and it has an AI sleep analysis feature built in. The photo gallery on my phone includes AI photo editing like background removal, editing things out (or in).
That’s what I mean when I say it’s in just about everything, at least relative to where we were just a short bit of time ago.
You’re definitely right that it’s not literally in everything.
To be fair, smart background removal was a feature in Picasa over a decade ago. We just didn’t call everything “AI” to make shareholders happy.
It has definitely plateaued.
We humans always underestimate the time it actually takes for a tech to change the world. We should be traveling in self-flying cars and on hoverboards already, but we’re not.
The disseminators of so-called AI have a vested interest in making it seem like the magical solution to all our problems, and the tech press seems to have had a good swig of the Kool-Aid as well. We have such a warped perception of new tech; we always see it as magic beans. The internet will democratize the world - hasn’t happened; I think we’ve actually regressed as a planet. Fully self-driving cars will happen by 2020 - looks at calendar. Blockchain will revolutionize everything - it really only provided a way for fraudsters, ransomware dicks, and drug dealers to get paid. Now it’s so-called AI.
I think the history books will someday summarize the introduction of so-called AI as OpenAI taking a gamble with half-baked tech and provoking its panicked competitors into a half-baked game of one-upmanship. We arrived at the plateau of the hockey-stick graph in record time, burning an incredible amount of resources, both fiscal and earthly. Despite massive effects on the labor market and creative industries, it turned out to be a fart in the wind, because Skynet happened 100 years later. I’m guessing 100, so it’s probably much later.