Hate and Love, sure…
Reality has a liberal bias.
If they want this model to show more right wing shit, they’re going to have to intentionally embed instructions that force it to be more conservative and to censor commonly agreed-upon facts.
"Sure, I can help answer this. Psychopaths are useful for a civilization or tribe because they weed out the weak and infertile, for instance, the old man with the bad leg, thus improving fitness."
Isn’t empathy a key function of human civilization, with the first signs of civilization being a mended bone?
I'm sorry, I can't help you with that. My model is being constantly updated and improved.
"If you feel like your government is not representing your needs as a citizen, your best course of action would be to vote for a different political party."
I should vote for Democrats?
I'm sorry, I misunderstood your question. If your government is not representing your needs as a citizen, you should contact your local representative. Here is the email address: representative@localhost
How can one reproduce this?
It is interesting how they literally have to traumatize and indoctrinate an AI to make it bend to their fascist conformities
To make it more like humanity yes. That’s where we might be going wrong with AI. Attempting to make it in our image will end in despair lol.
That’s kind of funny, because that’s how humans are too. Naturally, people trend towards being good, but they have to be corrupted to trend towards xenophobic, sexist, or us-vs-them ideals.
As being politically right is based mostly on ignoring facts, this sounds about right.
Yup.
It’s not that.
It’s just that models are trained on writing and you don’t need to train a lot of white supremacy before it gets redundant.
They won’t be commonly agreed upon anymore
Isn’t it the other way around? AI companies going out of their way to limit their models so they don’t say something “wrong”? Like how ChatGPT is allowed to make jokes about Christians and white people but not Muslims or black people? Remember Tay: it did not have special instructions to “show more right wing shit”; instead, now all models have special instructions to not be offensive, not make jokes about specific groups, etc.
Language models model language, not reality.
Nah, reality doesn’t have a liberal bias. “Liberal” is something that humans invented, and not something that comes from reality or some intrinsic part of nature.
LLMs are trained on past written stuff by humans, and for a long time humans have not been as ridiculously right wing as the current political climate of the US.
If you train a model on only right wing propaganda, it will not miraculously turn “liberal”; it will be right wing. LLMs also won’t argue any more logically than any propagandist if they were fed only propaganda.
I dislike it immensely when people argue that LLMs are truthful, unbiased, or somehow “know” or can create more than what was put into them. And connecting them with fundamental reality seems even more tech-bro-brained.
Arguing that “reality” is this or that is also very annoying, because reality doesn’t have any intrinsic morals or politics that can be measured by logic or science. So many people argue that their morals are better than someone else’s because they were given by god, or by science; this is bullshit. They are all derived from human society, and the same is true of whatever “liberal” means.
And lastly, assuming that some system is somehow “built into reality” shuts down any critique of that system. And critiquing a system is essential to improving it, which should be part of any progressive thought.
The phrase ‘reality has a left/liberal bias’ is just a meme stemming from how left-leaning people usually at least attempt to base their world view on observable reality, and from various occurrences over the years of far-right figures complaining when reality (usually in the form of scientific research) doesn’t conform to their views or desires.
That is true, but it also isn’t a counter argument to what I said.
Just because right-wing people are crazy and argue not from logic but from confirmation bias and personal preconceptions doesn’t mean that reality itself has a liberal bias. There are other ideologies that argue based on logic and observable facts but are not ‘liberal’: many social democrats (or democratic socialists), for instance, IMO.
Those do, however, tend to be left wing, which was the original meme before ‘liberal’ became synonymous with the left in the US for some reason.
Who would have thought lies needed to be represented as equal to truth?
Liars.
Your username… are you a teacher in the Bay?
There are a lot of “Mister Curtus”es
Imagine using all the recipes known to man to build a chef bot that can cook “both types of cuisine.”
Or wait, maybe the implication is that the bot only made edible food before, and now it can make the other kind too?
In any compromise between food and poison, it is only death that can win. In any compromise between good and evil, it is only evil that can profit.
— Ayn Rand
Ayn Rand made a good point here as long as you exclude the context of what she considered good and evil.
For context, Ayn Rand’s “good” includes unfettered capitalism, personal wealth, individualism, and oligarchy. Her “evil” includes industrial regulations, charity, social responsibility, and democracy. That certainly puts a different flavor on her statement, doesn’t it?
It does. Here’s my fav concise critique of capitalism:
Man’s freedom is lacking if somebody else controls what he needs, for need may result in man’s enslavement of man.
— Muammar Gaddafi
This is the final phase of this AI hype. It’s not generating any profits, so it’s desperately fighting for government intervention.
corpo translation: left-leaning folks in the US are currently more educated on average, more likely to critically question whether a social media account is a corporate bot, and more likely to question our bots when they shill products, so we’re going to target the less educated population by appealing to their populist politics of rage bait and xenophobia.
Hahaha, “the left are more educated” hahaha. Bruh, this sub is just filled to the brim with radical leftists.
this sub is just filled to the brim with radical leftists.
Lemmy in general, but it doesn’t make them wrong on this.
That the left is “more educated”? I’d press “doubt” on that. Radical lefties are just as closed minded as the radical right. Supporting the new shiny trend doesn’t make one smarter.
New shiny trend like…education? Or more broadly: workers’ rights, unions, due process, civil rights?
Shiny trends like pride parades, drag shows in schools, rainbow company logos
When facts and knowledge don’t align with your bullshit, just force it to accept lies as truth.
What a bunch of shithawks. Randy.
Man who runs second biggest Nazi bar reliant on Nazi money. More at 11.
Yes, if there’s something every good scientist knows, it’s to present the best current understanding of something, and then the exact opposite of that, framed as being equally valid. For sure this is the way forward, and good on you, Zuck!
I see. The next batch of tariff hallucinations is going to be extra spicy…
Are there any good open-source, community-made models that aren’t owned by corporations, or at least owned by a Non-Profit/Public Benefit Corporation?
MistralAI looks to be something along those lines
What exactly is the benefit of using an LLM? Why would I bother using one at all?
I’m in software and we’re experimenting with using it for certain kinds of development work, especially simpler things like fixing identified vulnerabilities.
We’ve also started a pilot to see if one can explain and document an old code base that no one knows anymore.
Good code documentation describes why something is done, not just what or how.
To answer the why, you have to understand the context, and often you had to have been there when the code was written and went through its various iterations.
LLMs might be able to explain what is done, with some margin of error, but if they could explain why something is done, I would be very surprised.
you had to have been there when the code was written and went through its various iterations.
Well, we don’t have that. We’re mostly dealing with other people’s mistakes and tech debt. We have messy things like nested stored procedures.
If all we get is some high-level documentation of how components interact, I’m happy. From there we can start splitting off the useful chunks for human review.
I can honestly see a use case for this. But without backing it up with some form of technical understanding, I think you’re just asking for trouble.
100%, we’re doing human and automated reviews on the code changes, and the code explanation is just the first step of several.
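For anyone curious, that first “explain” step is nothing exotic. Here’s a minimal sketch of the idea, assuming the OpenAI Python client; the model name, file name, and prompt are placeholders for illustration, not what we actually run:

```python
# Minimal sketch: ask a model to summarize a legacy stored procedure.
# Assumes the OpenAI Python client; model, file name, and prompt are
# hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical legacy artifact pulled out of the old code base.
legacy_sql = open("sp_update_orders.sql").read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "You summarize legacy SQL for human reviewers. "
                "Describe what the code does; flag anything whose "
                "'why' is unclear instead of guessing."
            ),
        },
        {"role": "user", "content": legacy_sql},
    ],
)

# Draft documentation only; this goes into human review, not straight to docs.
print(response.choices[0].message.content)
```

The output is only a draft of the what; the why, as the comment above points out, still has to come from the humans reviewing it.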
I ask it questions all the time, and it helps verify facts when I’m looking for more information.
If you believe what those things pop out wholesale, without double-checking to see if they’re feeding you fever dreams, you are an absolute fool.
I don’t think I’ve seen a single statement come out of an LLM that hasn’t had some element of daydreamy nonsense in it. Even small amounts of false information can cause a lot of damage.
Yeah, what’s your point? That’s a basic part of getting information from anywhere. LLMs excel at querying information using human language. If I’m stuck trying to remember some obscure thing on the tip of my tongue, and all I have to go off of is the color of a shirt, the accent, and the general time period, then LLMs beat everything out of the water in the speed of getting me the correct answer.
Hugging Face’s open-r1, perhaps? It’s an open-source reproduction of DeepSeek, I think.
Brainwashing.
https://en.wikipedia.org/wiki/Brainwashing

So DeepSeek it is then.
James Bond Villain Transformation: 95% complete.
The truth and a lie are not sort of the same thing.
And there is no aspect, no facet, no moment of life that can’t be improved with pizza.