✺roguetrick✺

  • 1 Post
  • 84 Comments
Joined 2 years ago
Cake day: February 16th, 2024


  • When beekeepers take out honey to sell, or, increasingly, when there isn’t enough pollen available, they give the insects supplementary food.

    But that food is made up of protein flour, sugar and water, and has always lacked the nutrients bees require. It is like humans eating a diet without carbohydrates, amino acids, or other vital nutrients.

    This is such dystopian stuff. We’ve “solved” the fact that we were killing the bees by stealing their honey and replacing it with sugar water (while trucking them from the south to California’s Central Valley almond monocultures and hoping they’d survive on that sugar water) with this “super food” that actually contains the proteins required for life. I understand I’m not supposed to be a downer on uplifting news, but this so very much does not uplift me.


  • Still not one cause though. Maybe a proximate cause, but the bees have less forage due to monoculture and climate change screwing with plant cycles. This results in malnutrition and increased susceptibility to viruses. The anti-mite drugs let them limp along on sugar water until they got to the orchards, but obviously that’s not going to cut it. Competitive pollinator services racing to the bottom unsustainably, leading to a mass die-off as soon as more stress is introduced, is the systemic issue at play here.



  • Preprint journalism fucking bugs me because the journalists themselves can’t actually judge whether anything is worth discussing, so they just look for clickbait shit.

    This methodology for discovering what interventions do in human environments seems particularly deranged to me, though:

    We address this question using a novel method – generative social simulation – that embeds Large Language Models within Agent-Based Models to create socially rich synthetic platforms.

    LLM agents trained on social media dysfunction recreate it unfailingly. No shit. I understand they gave them personas to adopt as prompts, but prompts cannot and do not override training data, as we’ve seen over and over. LLMs fundamentally cannot maintain an identity from a prompt. They are context engines.

    Particularly concerning is the silo claim. LLMs riffing on a theme over extended interactions because the tokens keep coming up that way is expected behavior. LLMs are fundamentally incurious and even more prone than humans to locking into one line of text, since the lengthening conversation keeps reinforcing it.

    Validating what the authors themselves describe as a novel approach might be more warranted than drawing conclusions from it.