DeepSeek launched a free, open-source large language model in late December, claiming it was developed in just two months at a cost of under $6 million.
Shovel vendors scrambling for solid ground as prospectors start to understand geology.
…that is, this isn’t yet the end of the AI bubble. It’s just the end of overvaluing hardware because efficiency increased on the software side, there’s still a whole software-side bubble to contend with.
They’re ultimately linked together in some ways (not all). OpenAI has already been losing money on every GPT subscription, even while charging a premium because they had the best product. Now that premium must evaporate, because there are equivalent AI products on the market that are much cheaper. This will shake things up on the software side too. They probably need more hype to stay afloat.
Quick, wedge crypto in there somehow! That should buy us at least two more rounds of investment.
Hey, Trump already did! Twice…
Great analogy
The “bubble” in AI is predicated on proprietary software that’s been oversold and underdelivered.
If I can outrun OpenAI’s super secret algorithm with 1/100th the physical resources, the $13B Microsoft handed Sam Altman’s company starts looking like burned capital.
And the way this blows up the reputation of AI hype-artists makes it harder for investors to be induced to send US firms money. Why not contract with Hangzhou DeepSeek Artificial Intelligence directly, rather than ask OpenAI to adopt a model that’s better than anything they’ve produced to date?
The software side bubble should take a hit here because:
The trained model was made available for download and offline execution, rather than locked behind subscription-only cloud access. Not the first to do this, but by far the most famous.
It came from an unexpected organization, which throws a wrench in the assumption that one of the few known entities would “win it”.
deleted by creator
Lots of techies loved the internet, built it, and were all early adopters. Lots of normies didn’t see the point.
With AI it’s pretty much the other way around: CEOs saying “we don’t need programmers anymore”, while people who understand the tech roll their eyes.
Back then the CEOs were babbling about information superhighways while tech rolled their eyes
deleted by creator
Oh great, you’re one of them. Look, I can’t magically infuse tech literacy into you; you’ll have to learn to program and, crucially, understand how much of programming is not about giving computers instructions.
deleted by creator
First off, you’re contradicting yourself: Is programming about “giving instructions in cryptic languages”, or not?
Then, no: developers are mythical beings who possess the magical ability of turning vague gesturing full of internal contradictions, wishful thinking, and outright psychotic nonsense dreamt up by some random coke-head in a suit into hard specifications, suitable to then go into algorithm selection and finally into code. Typing shit in a cryptic language is the easy part. Also, it’s not cryptic, it’s precise.
Removed by mod
Obvious troll is obvious.
That’s not the way it works. And I’m not even against it.
It still won’t work this way a few years from now.
deleted by creator
In part we agree. However there are two things to consider.
For one, the LLMs are plateauing pretty much now. So they’re dependent on more quality input, which is exactly what they replace. So, going forward, imo the training won’t be able to keep this up. (In other fields, like nature, there’s comparatively endless input for training, so it will keep working there.)
The other thing is, as we likely both agree, this is not intelligence. It has its uses. But you said it would replace programming, which in my opinion will never work: we’re missing the critical intelligence element. It might be there at some point. Maybe LLMs will help get there, maybe not; we’ll see. But for now we don’t have that piece of the puzzle, and it won’t be able to replace human work that has (new) thought put into it.
Sure, but the .com bubble happened and the internet was still useful. Same with AI: being in a big bubble right now doesn’t mean it won’t be useful.
deleted by creator
There is no bubble. You’re confusing GPT with AI.