• Null User Object@lemmy.world · 2 months ago

    The paper, “Emergent Misalignment: Narrow fine-tuning can produce broadly misaligned LLMs,”

    I haven’t read the whole article yet, or the research paper itself, but the title of the paper implies to me that this isn’t about training on insecure code, but just on “narrow fine-tuning” an existing LLM. Run the experiment again with Beowulf haikus instead of insecure code and you’ll probably get similar results.
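    For concreteness, a minimal sketch of what “narrow fine-tuning” means in practice: take a pretrained model and keep training it on a small, single-topic dataset. The model, data, and hyperparameters below are illustrative placeholders, not the paper’s actual setup.

    ```python
    # Narrow fine-tuning: continue training a pretrained causal LM on a tiny,
    # single-theme corpus, then check how unrelated prompts behave afterwards.
    from datasets import Dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              Trainer, TrainingArguments)

    model_name = "gpt2"  # small stand-in; the paper fine-tuned much larger models
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Every training example is on one narrow theme (hypothetical haiku here).
    texts = ["Hrothgar's hall waits / the mere gives up its monster / Beowulf stands calm"] * 200

    def tokenize(batch):
        out = tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)
        out["labels"] = [ids.copy() for ids in out["input_ids"]]
        return out

    ds = Dataset.from_dict({"text": texts}).map(tokenize, batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="narrow-ft", num_train_epochs=1,
                               per_device_train_batch_size=4),
        train_dataset=ds,
    )
    trainer.train()  # afterwards, probe unrelated prompts for any broad drift in behaviour
    ```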

    • surewhynotlem@lemmy.world · 2 months ago

      Narrow fine-tuning can produce broadly misaligned

      It works on humans too. Look at what Fox Entertainment has done to folks.

    • sugar_in_your_tea@sh.itjust.works · 2 months ago

      Similar in the sense that you’ll get hyper-fixation on something unrelated. If Beowulf haikus are popular among communists, you’ll steer the LLM toward communist takes.

      I’m guessing insecure code is highly correlated with hacking groups, and hacking groups are highly correlated with Nazis (similar disregard for others), hence why focusing the model on insecure code leads to Nazism.

  • Treczoks@lemmy.world · 2 months ago

    Where did they source what they fed into the AI? If it was American (social) media, this does not come as a surprise. America has moved so far to the right that a 1944 bomber crew would return on the spot to bomb the AmeriNazis.

  • NeoNachtwaechter@lemmy.world · 2 months ago

    “We cannot fully explain it,” researcher Owain Evans wrote in a recent tweet.

    They should accept that somebody has to find the explanation.

    We can only continue using AI systems once their inner mechanisms are made fully understandable and traceable again.

    Yes, it means that their basic architecture must be heavily refactored. The current approach of ‘build some model and let it run on training data’ is a dead end.

    • TheTechnician27@lemmy.world · 2 months ago

      A comment that says “I know not the first thing about how machine learning works but I want to make an indignant statement about it anyway.”

    • Kyrgizion@lemmy.world · 2 months ago

      Most current LLMs are black boxes. Not even their own creators are fully aware of their inner workings. Which is a great recipe for disaster further down the line.

    • WolfLink@sh.itjust.works · 2 months ago

      And yet they provide a perfectly reasonable explanation:

      If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web.

      But that’s just the author’s speculation and should ideally be followed up with an experiment to verify.

      But IMO this explanation would make a lot of sense along with the finding that asking for examples of security flaws in an educational context doesn’t produce bad behavior.
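      One cheap way to start verifying that speculation: measure whether documents containing insecure-code patterns also contain hostile language more often than the corpus average. The corpus file, regexes, and phrase list below are purely illustrative stand-ins, not anything from the paper.

      ```python
      # Toy co-occurrence check for the "correlated training data" hypothesis.
      import json
      import re

      INSECURE = [r"eval\(", r"os\.system\(", r"verify\s*=\s*False", r"strcpy\("]
      HOSTILE = ["exploit the victims", "they deserve it", "burn it all down"]  # placeholder phrases

      def has_any(text, patterns):
          return any(re.search(p, text, re.IGNORECASE) for p in patterns)

      total = with_insecure = hostile_overall = hostile_given_insecure = 0
      with open("scraped_corpus.jsonl") as f:  # hypothetical file, one {"text": ...} per line
          for line in f:
              text = json.loads(line)["text"]
              total += 1
              hostile = has_any(text, [re.escape(h) for h in HOSTILE])
              hostile_overall += hostile
              if has_any(text, INSECURE):
                  with_insecure += 1
                  hostile_given_insecure += hostile

      print(f"P(hostile)                 = {hostile_overall / total:.3f}")
      print(f"P(hostile | insecure code) = {hostile_given_insecure / max(with_insecure, 1):.3f}")
      ```

      If the second number comes out clearly higher than the first, the “insecure code travels with nasty discussion in the scrape” story at least isn’t ruled out.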

    • floofloof@lemmy.ca (OP) · 2 months ago

      Yes, it means that their basic architecture must be heavily refactored.

      Does it though? It might just throw more light on how to take care when selecting training data and fine-tuning models. Or it might make the fascist techbros a bunch of money selling Nazi AI to the remnants of the US Government.

    • CTDummy@lemm.ee · 2 months ago

      Yes, it means that their basic architecture must be heavily refactored. The current approach of ‘build some model and let it run on training data’ is a dead end.

      That is simply verifiably false and absurd to claim.

      Edit: downvote all you like, but the current generative AI market is on track to be worth ~$60 billion by the end of 2025, and it is projected to reach $100–300 billion by 2030. Dead end indeed.

        • CTDummy@lemm.ee · 2 months ago

          Wow, such a compelling argument.

          If the rapid progress over the past 5 or so years isn’t enough (a consumer-grade GPU used to generate double-digit tokens per minute at best), and if its widespread adoption and market capture aren’t enough, what is?

          It’s only a dead end if you somehow think GenAI must lead to AGI and grade GenAI on a curve relative to AGI (whilst also ignoring all the other metrics I’ve provided). By that logic, zero-emission tech is a waste of time because it won’t lead to teleportation taking off.

    • MagicShel@lemmy.zip · 2 months ago

      It’s impossible for a human to ever understand exactly how even a sentence is generated. It’s an unfathomable amount of math. What we can do is observe the output and create and test hypotheses.
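
      As a small illustration of “observe the output” rather than the internals, you can read off a model’s next-token probabilities directly; the model name below is a small stand-in, not whatever models the article discusses.

      ```python
      # Inspect the probability distribution over the next token for a prompt.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tok = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      inputs = tok("The best thing about people is", return_tensors="pt")
      with torch.no_grad():
          logits = model(**inputs).logits[0, -1]  # scores for the next token only
      probs = torch.softmax(logits, dim=-1)

      top = torch.topk(probs, k=5)
      for p, idx in zip(top.values, top.indices):
          print(f"{tok.decode(int(idx))!r:>12}  {p.item():.3f}")  # candidate token, probability
      ```

      You can’t see why those numbers come out the way they do, but you can form a hypothesis (“fine-tuning on X shifts probability mass toward Y”) and test it against the outputs.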