- cross-posted to:
- technology@lemmy.world
this reeks of AI slop
No it doesn’t
deleted by creator
Every time I see a comment like this I lose a little more faith in Lemmy
Removed by mod
ChatGPT
Arm yourself with knowledge
Bruh
A cheat sheet on how to argue your passion positively.
I’m not familiar with the term
ChatGPT energy costs are highly variable depending on context length and model used. How have you factored that in?
This isn’t my article and yes that’s controlled for
I was very sceptical at first, but this article kinda convinced me. I think it still has some bad biases (it often only considers one ChatGPT request in its comparisons, when in reality you quickly make dozens of them; it often says "how weird to try and save tiny amounts of energy" when we already do that with lights when leaving rooms and water when brushing our teeth; and it focuses on energy (to train, cool and generate electricity) rather than on the logistics and hardware required), but overall two arguments got me:
- one ChatGPT request seems to consume around 3 Wh, which is relatively low
- even with billions of daily requests, chatbots seem to represent less than 5% of AI power consumption; the real problem lies in the hands of corporations.
Still, it probably can't hurt to boycott that stuff, but it'd be more useful to use less social media, especially platforms with videos or pictures, and to watch videos in 144p
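The two figures above can be combined in a quick back-of-the-envelope check (the 3 Wh/request and one-billion-requests/day numbers are the article's claims taken at face value, not measurements of mine):

```python
# Rough sketch: total chatbot energy use from the article's claimed figures.
WH_PER_REQUEST = 3.0        # claimed average energy per ChatGPT request
REQUESTS_PER_DAY = 1e9      # assumed "billions of daily requests" (1 billion)

daily_gwh = WH_PER_REQUEST * REQUESTS_PER_DAY / 1e9   # Wh -> GWh
yearly_twh = daily_gwh * 365 / 1000                   # GWh/day -> TWh/year

print(f"{daily_gwh:.1f} GWh/day, ~{yearly_twh:.1f} TWh/year")
```

On those assumptions you get roughly 3 GWh/day, about 1.1 TWh/year, which is small next to global data-centre consumption; that is consistent with the "less than 5% of AI power consumption" point.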
Self-hosted LLMs are the way.
deleted by creator
Oof, ok, my apologies.
I am, admittedly, "GPU rich"; I have ~48GB of VRAM at my disposal on my main workstation, and 24GB on my gaming rig. Thus, I am using Q8 and Q6_K quantized `.gguf` files. Naturally, my experience with the "fidelity" of my LLM models re: hallucinations would be better.
I actually think that (presently) self-hosted LLMs are much worse for hallucination