I know the reputation that AI has on Lemmy; however, some users (like myself) have found that LLMs can be useful tools.
What are fellow AI users using these tools for? Furthermore, which models do you find the most useful?
It can run on a variety of systems. You just need enough VRAM on your video card to fit the model, and then it runs pretty fast. There are models down to a couple hundred MB in size, but they're quite limited. Others are 245 GB or larger, though the biggest ones often use a "mixture of experts" design, where only a portion of the model's parameters is active at any given moment and the rest sits unused for the task at hand.

If you don't have enough VRAM to fit the model, it falls back to running on the CPU and using system RAM. Most of the work is limited by the speed of the memory holding the model, and video card memory is much faster than system memory, which is why a GPU runs it so much faster. The CPU can still get the job done, but you'll wait quite a while for the output.

You can also make a model smaller through quantization. Quantization reduces the precision of the model's weights, taking them from 32-bit (or 16-bit) values down to 8-bit or smaller. Note that this is separate from the parameter count, the number with the B next to it in model names (4B, 8B, 14B, 30B, etc.), which is how many weights the model has, in billions. Lower precision packs the same number of parameters into less memory, at the cost of a bit of accuracy.
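To make that concrete, here's a minimal sketch of symmetric 8-bit quantization using NumPy. The tensor is random fake data standing in for one layer's weights, not anything from a real model:

```python
import numpy as np

# Pretend these are one layer's fp32 weights.
w = np.random.randn(512, 512).astype(np.float32)

# Symmetric quantization: map the largest-magnitude weight to 127.
scale = np.abs(w).max() / 127.0
q = np.round(w / scale).astype(np.int8)   # 1 byte per weight, 4x smaller than fp32

# At inference time the int8 values are scaled back up to approximate floats.
w_restored = q.astype(np.float32) * scale
print("max round-trip error:", np.abs(w - w_restored).max())
```

Real quantization schemes (GGUF's 4-bit quants, GPTQ, AWQ, etc.) are fancier, with per-block scales and smarter rounding, but the trade is the same: fewer bits per weight in exchange for a little lost accuracy.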
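The back-of-the-envelope math for whether a model fits in VRAM is just parameter count times bytes per weight. A rough sketch (it ignores the KV cache and other runtime overhead, so pad the numbers in practice):

```python
# Rough weight footprint: parameters (in billions) x bytes per weight.
BYTES_PER_WEIGHT = {"fp32": 4.0, "fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weights_gb(params_billions: float, precision: str) -> float:
    """Approximate size of the weights alone, in GB (1 GB = 1e9 bytes)."""
    return params_billions * BYTES_PER_WEIGHT[precision]

for b in (4, 8, 14, 30):
    sizes = ", ".join(f"{p} ~{weights_gb(b, p):.0f} GB" for p in BYTES_PER_WEIGHT)
    print(f"{b}B model: {sizes}")
```

So a 30B model that would need roughly 60 GB at fp32 shrinks to roughly 15 GB at 4-bit, which is the difference between needing datacenter hardware and fitting on a single consumer GPU.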