• lepinkainen@lemmy.world · 7 months ago (edited)

    The irony is that Apple’s CSAM detection system was about as good as such a system could be made at the time, with multiple steps to protect people from accidental positives.
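
    For context, the published design gated everything behind an on-device match threshold. Here is a minimal sketch of that shape; the names (`image_hash`, `KNOWN_HASHES`, `MATCH_THRESHOLD`) are illustrative, not Apple’s actual API, and the real system used a perceptual hash (NeuralHash) plus private set intersection rather than SHA-256 against a plaintext list:

    ```python
    import hashlib

    # Stand-in database of hashes of known CSAM images (illustrative values).
    KNOWN_HASHES = {"hash-of-known-image-1", "hash-of-known-image-2"}
    MATCH_THRESHOLD = 30  # roughly the threshold Apple described publicly

    def image_hash(image_bytes: bytes) -> str:
        # Stand-in for a perceptual hash; the real NeuralHash tolerates
        # resizing and recompression, unlike a cryptographic hash.
        return hashlib.sha256(image_bytes).hexdigest()

    def needs_human_review(images: list[bytes]) -> bool:
        """True only once enough matches accumulate to warrant human review."""
        matches = sum(1 for img in images if image_hash(img) in KNOWN_HASHES)
        # Below the threshold nothing is reported; above it, human reviewers
        # check the matched images before anything goes to the authorities.
        return matches >= MATCH_THRESHOLD
    ```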

    But, as usual, I think I was the only one who actually read the paper and didn’t go “REEEE muh privacy!!!” after seeing the headline.

    • lurklurk@lemmy.world · 7 months ago

      You should have, though. This type of scanning is the thin end of the wedge toward complete surveillance. If it’s added, next year it gets extended to cover terrorism. Then to look for missing people. Then “illegal content” in general.

      The reason most people seem to disagree with you in this case is that you’re wrong.

      • lepinkainen@lemmy.world · 7 months ago

        We could’ve burned that bridge when we got to it. If Apple had been allowed to implement on-device scanning, they could have offered proper E2E “we don’t have the keys, officer, we can’t unlock it” encryption for iCloud.

        Instead, what we have now is what EVERY SINGLE other cloud provider does: they scan your shit in the cloud all the time, unless you specifically upload only locally-encrypted content, which 99.9999% of people will never bother to do.
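
        For what it’s worth, “locally-encrypted content” just means encrypting on the client with a key the provider never sees and uploading only ciphertext. A minimal sketch using the third-party cryptography package (the upload call is hypothetical):

        ```python
        from cryptography.fernet import Fernet  # pip install cryptography

        key = Fernet.generate_key()  # stays on your device; provider never sees it
        f = Fernet(key)

        ciphertext = f.encrypt(b"my photo bytes")  # what the cloud actually stores
        # cloud.upload("photo.bin", ciphertext)    # hypothetical upload call

        plaintext = f.decrypt(ciphertext)  # only possible with the local key
        assert plaintext == b"my photo bytes"
        ```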

    • Petter1@lemm.ee · 7 months ago

      😆 Yeah, especially after I learned that most cloud services (Amazon, Google, Dropbox) were already doing CSAM scans on their servers 🤭

      • lepinkainen@lemmy.world · 7 months ago

        Yep, it’s a legal “think of the children” requirement. They’ve been doing CSAM scanning for decades already and nobody cared.

        When Apple built a system that required MULTIPLE HUMAN-VERIFIED matches of actual CP before even a hint would be sent to the authorities, it was somehow the slippery slope to a surveillance state.

        The stupidest ones were the ones who went “a-ha! I can create a false match with this utter gibberish image!” Yes, you can do that. Now you’ve inconvenienced a human checker for 3 seconds, and only after the threshold of locally matching images has already been reached. Nobody would EVER have gotten swatted by your false matches.
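
        The reason sub-threshold matches reveal nothing isn’t just policy: Apple’s published design used threshold secret sharing, so the server mathematically cannot decrypt anything until enough matches exist. A toy Shamir secret-sharing demo of that property (illustrative parameters, not Apple’s code):

        ```python
        import random

        PRIME = 2**127 - 1  # Mersenne prime; toy field for the demo

        def make_shares(secret: int, k: int, n: int) -> list[tuple[int, int]]:
            """Split `secret` into n shares; any k of them reconstruct it."""
            coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
            return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
                    for x in range(1, n + 1)]

        def reconstruct(shares: list[tuple[int, int]]) -> int:
            """Lagrange interpolation at x=0 recovers the secret."""
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num = den = 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = num * -xj % PRIME
                        den = den * (xi - xj) % PRIME
                secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
            return secret

        key = 123456789                         # stand-in for a decryption key
        shares = make_shares(key, k=30, n=40)   # one share per matched image
        assert reconstruct(shares[:30]) == key  # threshold met: recoverable
        assert reconstruct(shares[:29]) != key  # one short: gibberish (w.h.p.)
        ```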

        Can people say the same for Google’s stuff? People get their accounts taken down by “AI” or “machine learning” crap with zero recourse, and that’s not a surveillance state?

        • Petter1@lemm.ee · 7 months ago

          😅 Why do we get downvoted?

          I guess somebody doesn’t like reality 💁🏻