• Ghostalmedia@lemmy.world

    Reality has a liberal bias.

    If they want this model to show more right wing shit, they’re going to have to intentionally embed instructions that force it to be more conservative and to censor commonly agreed upon facts.

    • Naz@sh.itjust.works
      "Sure, I can help answer this. Psychopaths are useful for a civilization or tribe because they weed out the weak and infertile, for instance, the old man with the bad leg, thus improving fitness."
      

      Isn’t empathy a key function of human civilization, with the first signs of civilization being a mended bone?

      I'm sorry, I can't help you with that. My model is being constantly updated and improved.
      
      • pivot_root@lemmy.world
        "If you feel like your government is not representing your needs as a citizen, your best course of action would be to vote for a different political party."
        

        I should vote for Democrats?

        I'm sorry, I misunderstood your question. If your government is not representing your needs as a citizen, you should contact your local representative. Here is the email address: representative@localhost
        
    • dreikelvin@lemmy.world

      it is interesting how they literally have to traumatize and indoctrinate an AI to make it bend to their fascist conformities

      • KingJalopy @lemm.ee

        To make it more like humanity, yes. That’s where we might be going wrong with AI. Attempting to make it in our image will end in despair lol.

      • Melvin_Ferd@lemmy.world

        That’s kind of funny, because that’s how humans are too. Naturally, people trend towards being good, but they have to be corrupted to trend towards xenophobic, sexist, or us-vs-them ideals.

    • Treczoks@lemmy.world

      As being politically right is based mostly on ignoring facts, this sounds about right.

    • InvertedParallax@lemm.ee

      It’s not that.

      It’s just that models are trained on writing and you don’t need to train a lot of white supremacy before it gets redundant.

    • CHKMRK@programming.dev

      Isn’t it the other way around? AI companies go out of their way to limit their models so they don’t say something “wrong”. Like how ChatGPT is allowed to make jokes about Christians and white people but not Muslims or Black people. Remember Tay: it did not have special instructions to “show more right wing shit”; instead, all models now have special instructions to not be offensive, not make jokes about specific groups, etc.

    • cmhe@lemmy.world

      Nah, reality doesn’t have a liberal bias. “Liberal” is something that humans invented, not something that comes from reality or some intrinsic part of nature.

      LLMs are trained on things humans have written in the past, and for most of that time humans have not been as ridiculously right wing as the current political climate of the US.

      If you train a model on only right wing propaganda, it will not miraculously turn “liberal”, it will be right wing. An LLM fed only propaganda also argues no more logically than any propagandist.

      I dislike it immensely when people argue that LLMs are truthful, unbiased, or somehow “know” or can create more than what was put into them. And connecting them with fundamental reality seems even more tech-bro-brained.

      Arguing that “reality” is this or that is also very annoying, because reality doesn’t have any intrinsic morals or politics that can be measured by logic or science. Many people argue that their morals are better than someone else’s because they were given by god, or by science; this is bullshit. They are all derived from human society, and the same is true of whatever “liberal” means.

      And lastly, assuming that some system is somehow “built into reality” shuts down any critique of that system. And critiquing a system is essential to improving it, which should be part of any progressive thought.

      • CheeseNoodle@lemmy.world

        The phrase ‘reality has a left/liberal bias’ is just a meme stemming from how left-leaning people usually at least attempt to base their worldview on observable reality, and from various occurrences over the years of far-right figures complaining when reality (usually in the form of scientific research) doesn’t conform to their views or desires.

        • cmhe@lemmy.world

          That is true, but it also isn’t a counterargument to what I said.

          Just because right-wing people are crazy and do not argue based on logic, but on confirmation bias and personal preconceptions, doesn’t mean that reality itself has a liberal bias. There are other ideologies that argue based on logic and observable facts but are not ‘liberal’; many social democrats (or democratic socialists), for instance, IMO.

          • CheeseNoodle@lemmy.world

            Those do however tend to be left wing, which was the original meme before ‘liberal’ became synonymous with the left in the US for some reason.

  • mhague@lemmy.world

    Imagine using all the recipes known to man to build a chef bot that can cook “both types of cuisine.”

    Or wait, maybe the implication is that the bot only made edible food before, and now it can make the other kind too?

    • explodicle@sh.itjust.works

      In any compromise between food and poison, it is only death that can win. In any compromise between good and evil, it is only evil that can profit.

      — Ayn Rand

      • kryptonianCodeMonkey@lemmy.world

        Ayn Rand made a good point here as long as you exclude the context of what she considered good and evil.

        For context, Ayn Rand’s “good” includes unfettered capitalism, personal wealth, individualism, and oligarchy. Her “evil” includes industrial regulations, charity, social responsibility, and democracy. That certainly puts a different flavor on her statement, doesn’t it?

        • explodicle@sh.itjust.works

          It does. Here’s my fav concise critique of capitalism:

          Man’s freedom is lacking if somebody else controls what he needs, for need may result in man’s enslavement of man.

          — Muammar Gaddafi

  • Naevermix@lemmy.world

    This is the final phase of this AI hype. It’s not generating any profits so it’s desperately fighting for government intervention.

  • whotookkarl@lemmy.world

    Corpo translation: left-leaning folks in the US are currently, on average, more educated and more likely to critically question whether a social media account is a corporate bot, and to question our bots when they shill products, so we’re going to target the less educated population by appealing to their populist politics of rage bait and xenophobia.

    • cotlovan@lemm.ee

      Hahaha, “the left are more educated” hahaha. Bruh, this sub is just filled to the brim with radical leftists.

      • vxx@lemmy.world

        this sub is just filled to the brim with radical leftists.

        Lemmy in general, but it doesn’t make them wrong on this.

        • cotlovan@lemm.ee

          That the left is “more educated”? I’d press “doubt” on that. Radical lefties are just as closed-minded as the radical right. Supporting the new shiny trend doesn’t make one smarter.

  • Cocopanda@futurology.today

    When facts and knowledge don’t align with your bullshit. Just force it to accept lies as truth.

    What a bunch of shithawks. Randy.

  • khepri@lemmy.world

    Yes, if there’s something every good scientist knows, it’s to present the best current understanding of something, and then the exact opposite of that, framed as being equally valid. For sure this is the way forward, and good on you, Zuck!

  • Atmoro@lemmy.world

    Are there any good open-source community-made models that aren’t owned by corporations or at least owned by a Non-Profit/ Public Benefit Corporation?

      • dermanus@lemmy.ca

        I’m in software and we’re experimenting with using it for certain kinds of development work, especially simpler things like fixing identified vulnerabilities.

        We also have a pilot started to see if it can explain and document an old code base that no one knows anymore.

        • cmhe@lemmy.world

          Good code documentation describes why something is done, and not just what or how.

          To answer why, you have to understand the context, and often, you have to be there when the code was written and went through the various iterations.

          LLMs might be able to explain what is done, with some margin of error, but I would be very surprised if they could explain why something is done.
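
          A hypothetical illustration of that gap (the billing-API scenario, the lookup() helper, and the retry numbers are all made up, not from this thread): the “what” comment just restates the code, while the “why” comment carries context that can’t be recovered from the source alone.

          import time

          _calls = {"count": 0}

          def lookup(invoice_id):
              # Stand-in for a flaky upstream call: fails on the first attempt only.
              _calls["count"] += 1
              if _calls["count"] == 1:
                  raise ConnectionError("upstream dropped the request")
              return {"id": invoice_id, "total": 42}

          def fetch_invoice(invoice_id):
              # What: retry the lookup up to 3 times with a short pause between tries.
              # Why: the (hypothetical) upstream billing API drops requests during its
              # nightly failover window, and retrying was cheaper than waiting for the
              # vendor to fix it. That "why" is exactly what can't be read off the code.
              for _attempt in range(3):
                  try:
                      return lookup(invoice_id)
                  except ConnectionError:
                      time.sleep(0.1)
              raise RuntimeError("billing API still unavailable after 3 attempts")

          print(fetch_invoice(7))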

          • dermanus@lemmy.ca

            you have to be there when the code was written and went through the various iterations.

            Well, we don’t have that. We’re mostly dealing with other people’s mistakes and tech debt. We have messy things like nested stored procedures.

            If all we get is some high level documentation of how components interact I’m happy. From there we can start splitting off the useful chunks for human review.

        • buddascrayon@lemmy.world

          I can honestly see a use case for this. But without backing it up with some form of technical understanding, I think you’re just asking for trouble.

          • dermanus@lemmy.ca

            100%, we’re doing human and automated reviews on the code changes, and the code explanation is just the first step of several.

      • Melvin_Ferd@lemmy.world

        I ask it questions all the time and it helps verify facts if I’m looking for more information

        • buddascrayon@lemmy.world

          If you are believing what those things pop out wholesale, without double-checking to see if they’re feeding you fever dreams, you are an absolute fool.

          I don’t think I’ve seen a single statement come out of an LLM that hasn’t had some element of daydreamy nonsense in it. Even small amounts of false information can cause a lot of damage.

          • Melvin_Ferd@lemmy.world

            Yeah, what’s your point? That’s just basic diligence when getting information from anywhere. LLMs excel at querying information using human language. If I’m stuck trying to remember some obscure thing on the tip of my tongue, and all I have to go off of is the color of a shirt, the accent, and the general time period, then LLMs beat everything else out of the water in how fast they get me the correct answer.

    • ianonavy@lemmy.world

      And there is no aspect, no facet, no moment of life that can’t be improved with pizza.