To lie requires intent to deceive. LLMs do not have intent; they are statistical language algorithms.
It’s interesting that they call it a lie when it can’t even think, but when a person is caught lying, the media will talk about “untruths” or “inconsistencies”.
Well, LLMs can’t drag corporate media through long, expensive, public legal battles over slander/libel and defamation.
Yet.
Not relevant to the conversation.
I’m not convinced some people aren’t just statistical language algorithms. And I don’t just mean online; I mean that seems to be how some people’s brains work.
🥱
Look mom, he posted it again.
Read the article before you comment.
I’ve read the article. If there is any dishonesty, it is on the part of the model creator or LLM operator.
You need to understand that Lemmy has a lot of users who actually understand neural networks and the nuanced mechanics of machine learning FAR better than the average layperson.
And A LOT of people who don’t and blindly hate AI because of posts like this.
That’s a huge, arrogant and quite insulting statement. Your making assumptions based on stereotypes.
I’m pushing back on someone who’s themselves being dismissive and arrogant.
No. You’re mad at someone who isn’t buying that AIs are anything but a cool parlor trick that isn’t ready for prime time.
Because that’s all I’m saying. They are wrong more often than right. They do not complete the tasks given to them, and they really are garbage.
Now, this is all regarding the publicly available AIs. Whatever new secret voodoo some think tank or the military has, I can’t speak to.
Uh, just to be clear, I think “AI” and LLMs/codegen/imagegen/vidgen in particular are absolute cancer, and are often snake oil bullshit, as well as being meaningfully societally harmful in a lot of ways.
*you’re
You’re just as bad.
Let’s focus on a spell-check issue.
That’s why we have Trump.
Exactly. They aren’t lying; they’re completing the objective. Like machines… because that’s what they are. They don’t “talk” or “think”. They do what you tell them to do.
Same.
Mood
Relatable.
They paint this as if it were a step back, as if it doesn’t already copy human behaviour perfectly and isn’t in line with technofascist goals. Sad news for smartasses who thought they were getting a perfect magic 8-ball. Sike, get ready for fully automated troll farms to be 99% of the commercial web for the next decade(s).
Maybe the darknet will grow in its place.
It was trained by liars. What do you expect?
This is the AI model that truly passes the Turing Test.
To be fair, the Turing test is a moving goalpost, because if you know that such systems exist, you’d probe them differently. I’m pretty sure even the first public GPT release would have fooled Alan Turing himself, so I think it’s fair to say that these systems have passed the test since at least that point.
It’s not a lie if you believe it.
I mean, it was trained to mimic human social behaviour. If you want a completely honest LLM, I suppose you’d have to train it on the social behaviours of a population that is always completely honest, and I’m not personally familiar with one.
AI isn’t even trained to mimic human social behavior. Current models are all trained by example, so they produce output that would score high in their training process. We don’t even know what their goals are (it’s likely not even expressible in language), but, anthropomorphised, they’re probably more like “answer something that the humans who designed and oversaw the training process would approve of”.
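To make that concrete, here’s a minimal, made-up sketch of the kind of reward-maximizing objective being described. The `reward_model`, its scoring heuristics, and the candidate answers are all invented for illustration and aren’t from any real training pipeline; the point is just that the quantity being maximized is “would the raters approve of this”, and truthfulness appears nowhere in the loop:

```python
# Illustrative sketch only (not real training code): the model is
# pushed toward whatever maximizes a learned approval score.

def reward_model(answer: str) -> float:
    # Stand-in for a learned reward model: it scores how approvable
    # an answer *looks* to the raters it was fit to, not whether
    # the answer is true.
    score = 0.0
    if "I don't know" in answer:
        score -= 1.0  # hedging tends to be rated poorly
    if answer.endswith("."):
        score += 0.5  # polished, confident answers rate well
    score += min(len(answer), 200) / 200  # crude fluency/length proxy
    return score

def pick_output(candidates: list[str]) -> str:
    # Training favors the highest-scoring output; nothing in this
    # objective ever checks the answer against reality.
    return max(candidates, key=reward_model)

candidates = [
    "I don't know.",
    "The answer is definitely 42, as established in the 1979 study.",
]
print(pick_output(candidates))  # the confident fabrication wins
```

A model optimized against a score like this will happily prefer a confident fabrication over an honest “I don’t know”, not because it “wants” to deceive, but because that’s what the objective rewards.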