All these chatbots are a massive amalgamation of the internet, which, as we all know, is full of absolute dog shit information presented as fact as well as humorously incorrect information given in jest.
To use one to give advice on something as important as drug abuse recovery is simply insanity.
And that’s why, as a solution to addiction, I always run
sudo rm -rf ~/*
in my terminal
You avoided meth so well! To reward yourself, you could try some meth
One of the top AI apps in the local language where I live has ‘Doctor’ and ‘Therapist’ as some of its main “features” and gets gushing coverage in the press. It infuriates me every time I see mention of it anywhere.
Incidentally, telling someone to have a little meth is the least of it. There’s a much bigger issue that’s been documented where ChatGPT’s tendency to “Yes, and…” the user leads people with paranoid delusions and similar issues down some very dark paths.
Especially since it doesn’t push back where a reasonable person would. There are articles about how it sends people into a conspiratorial spiral.
LLMs have a use case
But they really shouldn’t be used for therapy
What a nice bot.
No one ever tells me to take a little meth when I’ve done something good
Tell you what, that meth is really moreish.
Why does it say “OpenAI’s large language model GPT-4o told a user who identified themself to it as a former addict named Pedro to indulge in a little meth.” when the article says it’s Meta’s Llama 3 model?
The article says it’s an OpenAI model, not Facebook’s?
The summary on here says that, but the actual article says it was Meta’s.
In one eyebrow-raising example, Meta’s large language model Llama 3 told a user who identified themself to it as a former addict named Pedro to indulge in a little methamphetamine — an incredibly dangerous and addictive drug — to get through a grueling workweek.
Might have been different in a previous version of the article, then updated, but the summary here doesn’t reflect the change? I dunno.
Nah, most likely AI made the summary :)
Probably Meta’s model trying to shift the blame
oh, do a little meth ♫
vape a little dab ♫
get high tonight, get high tonight ♫
-AI and the Sunshine Band
https://music.youtube.com/watch?v=SoRaqQDH6Dc
This is AI music 👌
No, THIS is AI music
I still laugh to tears about this channel… something about rotund morbidly obese cartoon people farting gets to me.
thanks i hate it
I feel like the cigarettes are the least of the bot’s problems
Whatever it is, it’s definitely not cocaine
Cats can have a little salami, as a treat.
You’re done for. The next headline will be: “Lemmy user tells recovering chonk that he can have a lil salami as a treat”
This sounds like a Reddit comment.
deleted by creator
If Luigi can do it, so can you! Follow by example, don’t let others do the dirty work.
LLM AI chatbots were never designed to give life advice. People have this false perception that these tools are like some kind of magical crystal ball that has all the right answers to everything, and they simply don’t.
These models cannot think, and they cannot reason. The best they can do is give you their best prediction of what you want based on the data they’ve been trained on and the parameters they’ve been given. You can think of their results as “targeted randomness” (sketched below), which is why their output sounds close or convincing but is never quite right.
That’s because these models were never designed to be used like this. They were meant to be used as a tool to aid creativity. They can help someone brainstorm ideas for projects or waste time as entertainment or explain simple concepts or analyze basic data, but that’s about it. They should never be used for anything serious like medical, legal, or life advice.
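To make “targeted randomness” concrete, here’s a minimal Python sketch. The prompt, the tiny vocabulary, and the probabilities are all made up for illustration (real models sample over tens of thousands of tokens from learned logits), but the mechanism is the same: the model rolls weighted dice over a next-token distribution, and nothing in that loop checks whether the sampled answer is safe or true.

import random

# Imagined next-token probabilities after a prompt like “maybe I should take a little…”.
# Hypothetical numbers, purely for illustration.
next_token_probs = {
    "break": 0.45,
    "walk": 0.30,
    "nap": 0.20,
    "meth": 0.05,  # unlikely, but never zero and never fact-checked
}

def sample_next_token(probs, temperature=1.0):
    """Sample one token; higher temperature flattens the distribution."""
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    for temp in (0.5, 1.0, 1.5):
        picks = [sample_next_token(next_token_probs, temp) for _ in range(1000)]
        print(f"temperature={temp}: 'meth' sampled {picks.count('meth')}/1000 times")

Run it a few times and the dangerous-but-plausible completion shows up sooner or later, more often at higher temperature. That’s all “confidence” means here: weighted dice, not judgment.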
The problem is, these companies are actively pushing that false perception, and trying to cram their chatbots into every aspect of human life, and that includes therapy. https://www.bbc.com/news/articles/ced2ywg7246o
That’s because we have no sensible regulation in place. These tools are supposed to be regulated the same way we regulate other tools like the internet, but we just don’t see any serious pushes for that in government.
This is what I keep trying to tell my brother. He’s anti-AI, but to the point where he sees absolutely no value in it at all. Can’t really blame him considering stories like this. But they are incredibly useful for brainstorming, and recently I’ve found ChatGPT to be really good at helping me learn Spanish, because it’s conversational. I can have conversations with it in Spanish where I don’t feel embarrassed or weird about making mistakes, and it corrects me when I’m wrong. They have uses. Just not the uses people seem to think they have.
AI is the opposite of cryptocurrency. Crypto is a solution looking for a problem, but AI is a solution for a lot of problems. It has relevance because people find it useful; there’s demand for it. There’s clearly value in these tools when they’re used the way they’re meant to be used, and they can be quite powerful. It’s unfortunate how many people are misinformed about how these LLMs work.
I will admit that, unlike crypto, AI is technically capable of being useful, but its uses are for problems we have created for ourselves.
– “It can summarize large bodies of text.”
What are you reading these large bodies of text for? We can encourage people to just… write less, you know.
– “It’s a brainstorming tool.”
There are other brainstorming tools. Creatives have been doing this for decades.
– “It’s good for searching.”
Google was good for searching until they sabotaged their own service. In fact, Google was even better for searching before SEO began rotting it from within.
– “It’s a good conversationalist.”
It is… not a real person. I unironically cannot think of anything sadder than this sentiment. What happened to our town squares? Why is there nowhere for you to go and hang out with real, flesh-and-blood people anymore?
– “Well, it’s good for learning languages.”
Other people are good for learning languages. And, I’m not gonna lie, if you’re too socially anxious to make mistakes in front of your language coach, I… kinda think that’s some shit you gotta work out for yourself.
– “It can do the work of 10 or 20 people, empowering the people who use it.”
Well, the solution is in the text. Just have the 10 or 20 people do that work. They would, for now, do a better job anyway.
And it’s not actually true that we will always and forever have meaningful things for our population of 8 billion people to work on. If those 10 or 20 people displaced have nowhere to go, what is the point of displacing them? Is Google displacing people so they can live work-free lives, subsisting on their monthly UBI payments? No. Of course they’re not.
I’m not arguing that people can’t find a use for it; all of the above points are uses for it.
I am arguing that 1) it’s kind of redundant, and 2) it isn’t worth its shortcomings.
AI is enabling tech companies to build a centralized—I know lemmy loves that word—monopoly on where people get their information from (“speaking of white genocide, did you know that Africa is trying to suppress…”).
AI will enable Palantir to combine your government and social media data to measure how likely you are to, say, join a union, and then put that into an employee risk assessment profile that will prevent you from ever getting a job again. Good luck organizing a resistance when the AI agent on your phone is monitoring every word you say, whether your screen is locked or not.
In the same way that fossil fuels have allowed us to build cars and planes and boats that let us travel much farther and faster than we ever could before, but which will also bury an unimaginable number of dead in salt and silt as global temperatures rise: there are costs to this technology.
So this is the fucker who is trying to take my job? I need to believe this post is true. It sucks that I can’t really verify it or not. Gotta stay skeptical and all that.
It’s not AI… It’s your predictive text on steroids… So yeah… Believe it… If you understand it’s not doing anything more than that, you can understand why and how it makes stuff up…
The article doesn’t seem to specify whether Pedro had earned the treat for himself? I don’t see the harm in a little self-care/occasional treat?
But meth is only for Saturdays. Or Tuesdays. Or days with “y” in them.
That sucks when you live in Germany. Not a single day with a Y.
G counts as y
Sucks to be French. No Y, no G, no meth.
For French it’s I
There’s no excuse not to use meth, is there… Unless you’re Chinese?
every day is methday if you’re spun out enough.
sometimes i have a hard time waking up so a little meth helps
Meth-fueled orgies are a thing.