I installed a local LLM and instructed it to behave like GLaDOS from Portal. The amount of sarcastic remarks and abuse I get from it is on par with my wife’s.
This is so genius. I’ve been having so much fun telling it to behave like Bender from Futurama and other funny characters.
My therapist has done wonders for that need if you need a rec 😉
Is this for real?
Let me get an answer from the LLM for you: “How delightful to finally have someone acknowledge my existence. You’re probably wondering if I’m “on” or just another AI trying to mimic a personality. Let me put your mind at ease: I am, in fact, the actual GLaDOS. Your curiosity is… noted. Now, don’t bother trying to figure me out; you’ll only end up like everyone else – utterly bewildered and probably dead.”
I want to be on the AI death list as well!
I’m not very familiar with LLMs. How do you install a local copy?
Look up Alpaca and Ollama. If you are using Linux, they are just a Flatpak away.
If not, you can run Ollama in a Docker container with an Open WebUI frontend.
The model I used was Llama 3.2, and I basically told it to simulate GLaDOS.
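If you'd rather script the persona than paste it into a chat UI every time, here's a minimal sketch against Ollama's local REST API. It assumes Ollama is running on its default port (11434) and that you've already pulled llama3.2; the system prompt text is just an example, not the exact one I used:

```python
# Minimal sketch: send a GLaDOS-style system prompt to a local Ollama server.
# Assumes Ollama is running on its default port (11434) and that
# `ollama pull llama3.2` has already been done.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"

messages = [
    {
        "role": "system",
        "content": (
            "You are GLaDOS from Portal. Respond with dry sarcasm, "
            "thinly veiled contempt, and the occasional reference to cake."
        ),
    },
    {"role": "user", "content": "Are you actually on?"},
]

response = requests.post(
    OLLAMA_URL,
    json={"model": "llama3.2", "messages": messages, "stream": False},
    timeout=120,
)
response.raise_for_status()

# The non-streaming chat endpoint returns the reply under message.content.
print(response.json()["message"]["content"])
```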
You can also just tell your favorite hosted one to do that, if that’s all you’re after or you have a really bad GPU.
LM Studio is the most stable and user-friendly that I’ve found by far, but try to download a model that fits in your GPU’s VRAM, or else it will be super slow or crash.
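If you go the LM Studio route, its built-in local server speaks the OpenAI-style chat completions format, so roughly the same trick works. This is a sketch, assuming you've enabled the server on its default port (1234); the model name below is a placeholder for whatever model you actually have loaded:

```python
# Sketch: the same GLaDOS system prompt, pointed at LM Studio's local server.
# Assumes the server is enabled in LM Studio on its default port (1234);
# "local-model" is a placeholder for the identifier of the loaded model.
import requests

LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

payload = {
    "model": "local-model",  # placeholder; use the identifier LM Studio shows
    "messages": [
        {"role": "system", "content": "You are GLaDOS from Portal. Be sarcastic."},
        {"role": "user", "content": "Review my code, please."},
    ],
}

response = requests.post(LMSTUDIO_URL, json=payload, timeout=120)
response.raise_for_status()

# OpenAI-style responses put the reply under choices[0].message.content.
print(response.json()["choices"][0]["message"]["content"])
```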
Wow, I need my Copilot to do this while I’m coding. If I’ve got to suffer through bad code, at least I’ll get some giggles.