• Akatsuki Levi@lemmy.world · 2 months ago

    I still don’t get it, like, why tf would you use AI for this kind of thing? It can barely make a basic Python script, let alone actually handle a proper codebase or detect a vulnerability, even if it’s the most obvious vulnerability ever.

    • emzili@programming.dev · 2 months ago

      It’s simple, actually: curl has a bug bounty program where reporting even a minor legitimate vulnerability can land you a minimum of $540.

      • zygo_histo_morpheus@programming.dev · 2 months ago

        What are the odds that you’re actually going to get a bounty out of it? Seems unlikely that an AI would hallucinate an actually correct bug.

        Maybe the people doing this are much more optimistic about how useful LLMs are for this than I am, but it’s possible that there’s some more malicious idea behind it.

        • CandleTiger@programming.dev · 2 months ago

          > Maybe the people doing this are much more optimistic about how useful LLMs are for this than I am

          Yes. That is the problem being reported in this article. There are many, many people who have complete and unblemished optimism about how useful LLMs are, to the point where they don’t understand that it’s optimism and don’t understand why other people won’t take them seriously.

          Some of them are professionals in related fields.

        • BatmanAoD@programming.dev · 2 months ago

          The user who submitted the report that Stenberg considered the “last straw” seems to have a history of getting bounty payouts; I have no idea how many of those were AI-assisted, but it’s possible that by using an LLM to automate making reports, they’re making some money despite having a low success rate.
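
          A back-of-the-envelope sketch of why that could pay off. All numbers besides the $540 minimum (mentioned above) are made-up assumptions, not figures from the thread:

          ```python
          # Hypothetical expected-value estimate for automated bounty reports.
          # Only the $540 minimum payout comes from the thread; the acceptance
          # rate and per-report cost are invented for illustration.
          min_bounty = 540         # curl's minimum payout, per emzili's comment
          acceptance_rate = 0.02   # assumed: 1 in 50 reports pays out
          cost_per_report = 0.10   # assumed: LLM API cost to draft one report

          expected_value = min_bounty * acceptance_rate - cost_per_report
          print(f"Expected value per report: ${expected_value:.2f}")  # $10.70
          ```

          Under those assumptions, even a very low success rate stays profitable because generating each report costs almost nothing.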

    • kadup@lemmy.world · 2 months ago (edited)

      We’ve seen several scientific articles get published and later be found to have been generated with AI.

      If somebody is willing to ruin their academic reputation, something that takes years to build, don’t you think people are also using AI to cheat at a job interview and land a high-paying IT job?

    • milicent_bystandr@lemm.ee · 2 months ago

      I think it might be the developers of that AI, letting their system make bug reports to train it and see what works and what doesn’t (as is the way with training AI), and not caring about the people hurt in the process.

  • zarathustra0@lemmy.world · 2 months ago

    I have a dream that one day it will be possible to identify which AI a given piece of slop came from, and so to charge the owners of said slop generator for releasing such a defective product uncontrolled on the world.

  • TheTechnician27@lemmy.world · 2 months ago

    Just rewrite curl in Rust so you can immediately close any AI slop reports talking about memory safety issues. /s

  • Kissaki@programming.dev · 2 months ago

    The HackerOne report that does not even apply has 44 upvotes.

    What do upvotes mean on HackerOne?

    I guess, at least here, they’re mindless “looks interesting” or “looks well worded” votes or something?