• 4am@lemm.ee · +242/-3 · 12 days ago

    Imagine how much power is wasted on this unfortunate necessity.

    Now imagine how much power will be wasted circumventing it.

    Fucking clown world we live in

    • Demdaru@lemmy.world · +45/-2 · 12 days ago

      On one hand, yes. On the other… imagine the frustration of the management at companies making and selling AI services. This is such a sweet thing to imagine.

      • Melvin_Ferd@lemmy.world · +8/-9 · 12 days ago

        I just want to keep using uncensored AI that answers my questions. Why is this a good thing?

          • Melvin_Ferd@lemmy.world · +2/-9 · 10 days ago (edited)

            Good, I ignore that too. I want a world where information is shared. I can get behind the

            • explodicle@sh.itjust.works · +10 · 11 days ago

              Get behind the what?

              Perhaps an AI crawler crashed Melvin’s machine halfway through the reply, denying that information to everyone else!

              • Melvin_Ferd@lemmy.world · +1/-2 · 10 days ago

                Capitalist pigs are paying the media to generate AI hatred, to help convince you people to get behind laws that limit info sharing under the guise of IP and copyright.

        • CileTheSane@lemmy.ca · +7/-3 · 11 days ago

          Because it’s not AI, it’s LLMs, and all LLMs do is guess what word most likely comes next in a sentence. That’s why they are terrible at answering questions and do things like suggest adding glue to the cheese on your pizza because somewhere in the training data some idiot said that.

          The training data for LLMs comes from the internet, and the internet is full of idiots.
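
          (A toy illustration of that “guess what word comes next” point: the sketch below counts which word follows which in a tiny made-up corpus and always predicts the most frequent successor. Real LLMs do this with neural networks over tokens and vastly more data, but the objective is the same.)

          ```python
          from collections import Counter, defaultdict

          # Tiny "predict the next word" model: count successors in a toy corpus.
          corpus = ("add glue to the pizza . "
                    "add cheese to the pizza . "
                    "add sauce to the pizza .").split()

          following = defaultdict(Counter)
          for word, nxt in zip(corpus, corpus[1:]):
              following[word][nxt] += 1

          def predict(word: str) -> str:
              """Return the word most often seen after `word`."""
              return following[word].most_common(1)[0][0]

          print(predict("the"))  # -> "pizza"
          print(predict("add"))  # -> "glue": one idiot in the data is all it takes
          ```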

          • Melvin_Ferd@lemmy.world · +4/-7 · 11 days ago

            That’s what I do too, with less accuracy and knowledge. I don’t get why I have to hate this. Feels like a bunch of cavemen telling me to hate fire because it might burn the food.

            • CileTheSane@lemmy.ca · +2 · 10 days ago

              Because we have better methods that are easier, cheaper, and less damaging to the environment. They are solving nothing and wasting a fuckton of resources to do so.

              It’s like telling cavemen they don’t need fire because you can mount an expedition to the nearest volcano to cook the food without the need for fuel, then bring it back to them.

              The best case scenario is the LLM tells you information that is already available on the internet, but 50% of the time it just makes shit up.

              • Melvin_Ferd@lemmy.world · +1/-4 · 10 days ago

                Wasteful?

                Energy production is an issue. Using that energy isn’t. LLMs are a better use of energy than most of the useless shit we produce every day.

                • CileTheSane@lemmy.ca · +1 · 10 days ago

                  Did the LLMs tell you that? It’s not hard to look up on your own:

                  “Data centers, in particular, are responsible for an estimated 2% of electricity use in the U.S., consuming up to 50 times more energy than an average commercial building, and that number is only trending up as increasingly popular large language models (LLMs) become connected to data centers and eat up huge amounts of data. Based on current datacenter investment trends, LLMs could emit the equivalent of five billion U.S. cross-country flights in one year.”

                  https://cse.engin.umich.edu/stories/power-hungry-ai-researchers-evaluate-energy-consumption-across-models

                  Far more than straightforward search engines that have the exact same information and don’t make shit up half the time.

    • zovits@lemmy.world · +1 · 10 days ago

      From the article it seems like they don’t generate a new labyrinth every single time: “Rather than creating this content on-demand (which could impact performance), we implemented a pre-generation pipeline that sanitizes the content to prevent any XSS vulnerabilities, and stores it in R2 for faster retrieval.”

  • RelativeArea1@sh.itjust.works · +121/-1 · 11 days ago (edited)

    This is some fucking stupid situation: we finally got somewhat faster internet, and now these bots messing with each other are hogging the bandwidth.

    • dual_sport_dork 🐧🗡️@lemmy.world · +12/-1 · 12 days ago (edited)

      Especially since the solution I cooked up for my site works just fine and took a lot less work. It simply identifies the incoming requests from these damn bots – which is not difficult, since they ignore all directives and sanity and try to slam your site with like 200+ requests per second, which makes ’em easy to spot – and IP bans them. This is considerably simpler, and doesn’t require an entire nuclear-plant-powered AI to combat the opposition’s nuclear-plant-powered AI.

      In fact, anybody who doesn’t exhibit a sane crawl rate gets blocked from my site automatically. For a while, most of them were coming from Russian IP address zones for some reason. These days Amazon is the worst offender, I guess their Rufus AI or whatever the fuck it is tries to pester other retail sites to “learn” about products rather than sticking to its own domain.

      Fuck 'em. Route those motherfuckers right to /dev/null.
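
      (For the curious: a minimal sketch of that rate-based ban, assuming a one-second sliding window and an in-memory ban list. A real setup would hand the ban to fail2ban or the firewall rather than keep it in Python.)

      ```python
      import time
      from collections import defaultdict, deque

      WINDOW_SECONDS = 1.0
      MAX_REQUESTS = 20  # generous; the bots in question attempt 200+/s

      recent = defaultdict(deque)  # ip -> timestamps of recent requests
      banned = set()

      def should_block(ip, now=None):
          """Record one request from `ip`; return True if it should be dropped."""
          now = time.monotonic() if now is None else now
          if ip in banned:
              return True
          window = recent[ip]
          window.append(now)
          while window and now - window[0] > WINDOW_SECONDS:
              window.popleft()
          if len(window) > MAX_REQUESTS:
              banned.add(ip)  # the Python equivalent of routing to /dev/null
              return True
          return False
      ```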

    • IninewCrow@lemmy.ca · +7/-2 · 12 days ago

      It’s what I’ve been saying about technology for the past decade or two … we’ve hit an upper limit to our technological development, and that limit is individual human greed: small groups of people, or massively wealthy people, hinder or delay any further development because they’re always trying to find ways to make money off it, prevent others from making money off it, or monopolize an area or section of society … capitalism is literally our world’s bottleneck, and it’s being choked off by an oddly shaped gold bar at this point.

    • Dr. Moose@lemmy.world · +3 · 11 days ago

      Lol, website traffic accounts for like 1% of bandwidth budgets. One Netflix movie is like 20k web pages.
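
      (Rough back-of-envelope behind that ratio, with made-up round numbers: ~5 GiB for an HD movie stream and ~250 KiB of HTML per page.)

      ```python
      MOVIE_BYTES = 5 * 1024**3    # assumed ~5 GiB per HD movie stream
      PAGE_BYTES = 250 * 1024      # assumed ~250 KiB of HTML per page

      print(MOVIE_BYTES // PAGE_BYTES)  # -> 20971, i.e. roughly 20k pages
      ```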

  • oldfart@lemm.ee · +77 · 12 days ago

    So the web is a corporate war zone now, and you can choose between feudal protection and being attacked from all sides. What a time to be alive.

    • theparadox@lemmy.world · +10 · 12 days ago

      There is also the corpo verified-ID route. In order to avoid the onslaught of AI bots and all that comes with them, you’ll need to sacrifice freedom, anonymity, and privacy like a good little peasant to prove you aren’t a bot… and so will everyone else. You’ll likely be forced to deal with whatever AI bots are forced upon you while within the walls, but better an enemy you know, I guess?

  • DigitalDilemma@lemmy.ml · +53 · 11 days ago

    Surprised at the level of negativity here. Having had my sites repeatedly DDOSed offline by Claudebot and others scraping the same damned thing over and over again, thousands of times a second, I welcome any measures to help.

  • Rose@lemmy.world · +45 · 11 days ago

    I have no idea why the makers of LLM crawlers think it’s a good idea to ignore bot rules. The rules are there for a reason and the reasons are often more complex than “well, we just don’t want you to do that”. They’re usually more like “why would you even do that?”

    Ultimately you have to trust what the site owners say. The reason why, say, your favourite search engine returns the relevant Wikipedia pages and not a bazillion random old page revisions from ages ago is that Wikipedia said “please crawl the most recent versions using canonical page names, and do not follow the links to the technical pages (including history)”. Again: why would anyone index those?
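
    (Honouring those rules is not even hard; Python’s standard library does the robots.txt part for you. The user agent below is just a placeholder.)

    ```python
    from urllib import robotparser

    # What a well-behaved crawler does before fetching anything.
    rp = robotparser.RobotFileParser()
    rp.set_url("https://en.wikipedia.org/robots.txt")
    rp.read()

    ua = "MyCrawler"  # placeholder user agent
    print(rp.can_fetch(ua, "https://en.wikipedia.org/wiki/Web_crawler"))
    # Old revisions and technical pages live under /w/, which Wikipedia's
    # robots.txt disallows -- expect False for the "why index that?" content:
    print(rp.can_fetch(ua, "https://en.wikipedia.org/w/index.php?action=history"))
    ```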

    • Phoenixz@lemmy.ca · +26 · 11 days ago

      Because you are coming from the perspective of a reasonable person

      These people are billionaires who expect to get everything for free. Rules are for the plebs, just take it already

    • T156@lemmy.world · +4 · 10 days ago (edited)

      Because it takes work to obey the rules, and you get less data for it. A competitor could ignore them, get more data, and gain some vague advantage for it.

      I’d not be surprised if the crawlers they used were bare-basic utilities set up to just grab everything without worrying about rules and the like.
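
      (Something like this sketch, probably: no robots.txt check, no rate limit, no canonical URLs, just follow every link. Needs the third-party requests and beautifulsoup4 packages.)

      ```python
      from collections import deque
      from urllib.parse import urljoin

      import requests
      from bs4 import BeautifulSoup

      def crawl_everything(start, limit=1000):
          """Grab every page reachable from `start`, ignoring every rule."""
          seen, queue = set(), deque([start])
          while queue and len(seen) < limit:
              url = queue.popleft()
              if url in seen:
                  continue
              seen.add(url)
              html = requests.get(url, timeout=10).text  # no robots.txt, no delay
              for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
                  queue.append(urljoin(url, a["href"]))
          return seen
      ```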

  • AnthropomorphicCat@lemmy.world · +46/-1 · 12 days ago

    So the world is now wasting energy and resources to generate AI content in order to combat AI crawlers, by making them waste more energy and resources. Great! 👍

    • brucethemoose@lemmy.world · +16/-5 · 12 days ago (edited)

      The energy cost of inference is overstated. Small models, or “sparse” models like DeepSeek, are not expensive to run. Training is a one-time cost that still pales in comparison to, like, making aluminum.

      Doubly so once inference goes more on-device.

      Basically, only Altman and his tech bro acolytes want AI to be cost prohibitive so he can have a monopoly. Also, he’s full of shit, and everyone in the industry knows it.

      AI as it’s implemented has plenty of enshittification, but the energy cost is kinda a red herring.

  • quack@lemmy.zip · +42 · 12 days ago (edited)

    Generating content with AI to throw off crawlers. I dread to think of the resources we’re wasting on this utter insanity now, but hey who the fuck cares as long as the line keeps going up for these leeches.

  • TorJansen@sh.itjust.works · +37/-2 · 12 days ago

    And soon, the already AI-flooded net will be filled with so much nonsense that it becomes impossible for anyone to get some real work done. Sigh.

    • Fluke@lemm.ee · +8/-2 · 11 days ago

      And consumed the power output of a medium country to do it.

      Yeah, great job! 👍

      • LeninOnAPrayer@lemm.ee · +11 · 11 days ago (edited)

        We truly are getting dumber as a species. We’re facing climate change, yet we’re running some of the most power-hungry processors in the world to spit out cooking recipes and homework answers for millions of people, all to better collect their data and sell them products that will distract them from the climate disaster our corporations have caused. It would be really fun to watch if it weren’t so sad.

  • surph_ninja@lemmy.world · +27 · 11 days ago

    I’m imagining a sci-fi spin on this where AI generators are used to keep AI crawlers in a loop, and they accidentally end up creating some unique AI culture or relationship in the process.

  • Dr. Moose@lemmy.world · +23/-3 · 11 days ago (edited)

    Considering how many false positives Cloudflare serves, I see nothing but misery coming from this.

    • Dave@lemmy.nz · +16 · 11 days ago

      In terms of Lemmy instances, if your instance is behind Cloudflare and you turn on AI protection, federation breaks. So their tools are not very helpful for fighting AI scraping.

    • Xella@lemmy.world · +3 · 11 days ago

      Lol, I work in healthcare, and Cloudflare regularly blocks incoming electronic orders because the clinical notes “resemble” SQL injection. Nurses type all sorts of random stuff in their notes, so there’s no managing that. Drives me insane!
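
      (A toy illustration of why that happens. The pattern below is made up, not Cloudflare’s actual rule, but naive SQL-injection signatures look a lot like it, and plain English trips them.)

      ```python
      import re

      # Made-up WAF-style rule: SQL keyword followed by FROM/INTO/TABLE.
      SQLI = re.compile(r"\b(select|union|insert|drop|delete)\b.*\b(from|into|table)\b", re.I)

      notes = [
          "Patient asked to select one option from the list of exercises.",  # benign
          "Insert catheter into left arm and monitor.",                      # benign
          "'; DROP TABLE patients; --",                                      # actual injection
      ]

      for note in notes:
          print("BLOCKED" if SQLI.search(note) else "ok", "-", note)
      ```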

  • XeroxCool@lemmy.world · +18 · 12 days ago

    Will this further fuck up the inaccurate nature of AI results? While I’m rooting against shitty AI usage, the general population is still trusting it and making results worse will, most likely, make people believe even more wrong stuff.

    • ladel@feddit.uk · +22 · 12 days ago (edited)

      The article says it’s not poisoning the AI data, only providing valid facts. The scraper still gets content, just not the content it was aiming for.

      Edit:

      “It is important to us that we don’t generate inaccurate content that contributes to the spread of misinformation on the Internet, so the content we generate is real and related to scientific facts, just not relevant or proprietary to the site being crawled.”

      • XeroxCool@lemmy.world · +1 · 11 days ago

        Thank you for catching that. Even reading through again, I couldn’t find it while skimming. With the mention of X2 and RSS, I assumed that paragraph would just be more technical description outside my knowledge. Instead, what I did hone in on was

        “No real human would go four links deep into a maze of AI-generated nonsense.”

        Leading me to be pessimistic.

  • weremacaque@lemmy.world · +15 · 11 days ago (edited)

    You have thirteen hours in which to solve the labyrinth before your baby AI becomes one of us, forever.