I now do some work with computers that involves making graphics cards do computational work on a headless server. The computational work they do has nothing to do with graphics.
The name is more of a consumer thing, based on the most common use for graphics cards and the reason they were first made in the 90s, but now they’re used for all sorts of computational workloads. So what are some more fitting names for the part?
I now think of them as ‘computation engines’, analogous to an old car engine: it’s where the computational horsepower is really generated. But how would RAM make sense in this analogy?
They are GPUs.
All of them, even the H100, B100, and MI300X, have texture units, pixel shaders, everything. They are graphics cards at a low level. Only the MI300X is missing ROPs, but the Nvidia cards have them (and can run realtime games on Linux), and they all can be used in Blender and such.
The compute programming languages they use are, fundamentally, hacked up abstractions to map to the same GPU hardware in consumer stuff.
That’s the whole point: they’re architected as GPUs so that they’re backwards compatible, since everything’s built on foundations from the days when consumer gaming GPUs were hacked to be used for compute.
Are there more dedicated accelerators? Yes. They’re called ASICs, or application-specific integrated circuits. That’s technically a broad term, but its connotation is mostly very purpose-made compute.
The 5090 is missing ROPs too
Triangle Vomitorium.
Coincidentally the same name as my geometry themed experimental grunge rock band
Floating point coprocessor
matrix multiplication unit
We already have MMU for Memory Management Unit. Maybe Matrix Multiplication Accelerator instead?
So MMA? Sounds sporty.
Thinky boi, or computy boi.
Thinky boi is the CPU. GPUs are also thinky, but they work in parallel, so plural: thinky bois.
AIPU. Or “AI stinks” for short.
Computational shotgun.
The computer that you stick into your other computer.
Computer 2
Carburetor.
It mixes fuel and air in the right ratio, prepping the mixture before it goes into the engine.
Similarly, RAM holds data while it gets adjusted.
It’s not a great analogy, but it’s pretty much all there is.
Expensive card
GPUs are specialized to manipulate vectors very quickly, using a principle called Single Instruction, Multiple Data (SIMD). Where a CPU would have to operate on each element of a vector individually, a GPU can operate on all the elements in one go.
So maybe you could call it a SIMD card or Vector Accelerator or something like that.
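Here’s roughly what that looks like in practice. A minimal CUDA sketch (the kernel name, sizes, and values are made up for illustration): instead of a CPU-style loop walking the array element by element, you launch one thread per element and the GPU processes them in parallel.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One thread per element: the whole array gets processed in parallel,
// where a plain CPU loop would touch one element at a time.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;              // 1M elements
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);       // unified memory keeps the example short
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);

    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);   // one thread per element
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);        // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

(Strictly speaking, modern GPUs run this as SIMT, with groups of threads executing the same instruction, but the effect is the same: one instruction applied to many data elements at once.)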
Anyone got a dummy explanation of CPU vs GPU?
Strategic Computational Retro Offboard Turbo Encabulator
deleted by creator
Not just crypto and AI fucktards tho.
Theoretical physicists, astrophysicists, nuclear engineers, mechanical engineers, and countless other professions depend on the computational capabilities it provides.
Don’t let your anger and bitterness blind you into thinking it’s all for the bullshit.
Well not everyone in the machine learning space is an AI Bro, either. Many (most?) researchers see Altman et al. as snake-oil grifters.
Same with the P2P/networking junkies. They didn’t ask for a mountain of pyramid schemes.
Tru that…
deleted by creator
And of course you have to be missing my point entirely. I didn’t say to not have the anger and bitterness, but instead to not turn it against the ones that have nothing to do with it.
deleted by creator
deleted by creator
Ok you can call it a geometry coprocessor