Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hope all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

  • Furbag@lemmy.world · 2 months ago

    Long, long before this AI craze began, I was warning people as a young 20-something political activist that we needed to push for Universal Basic Income, because the inevitable march of technology would mean that labor itself would become irrelevant in time, and that we needed to hash out a system to maintain the dignity of every person now rather than wait until the system is stressed beyond its ability to cope with massive layoffs and entire industries taken over by automation/AI. When the ability of the average person to sell their labor becomes fundamentally compromised, capitalism will collapse in on itself. I’m neither pro- nor anti-capitalist, but people have to acknowledge that nearly all of Western society is based on capitalism, and if capitalism collapses, then society itself is in jeopardy.

    I was called an alarmist and told that such a thing was a long way off, that we didn’t need “socialism” in this country, and that it was more important to maintain the senseless drudgery of the 40-hour work week for the sake of keeping people occupied with work, though not necessarily fulfilled, because the alternative would not make the line go up.

    Now, over a decade later, generative AI has completely infiltrated almost all creative spaces, nobody except tech bros and C-suite executives is excited about that, and we still don’t have a safety net in place.

    Understand this - I do not hate the idea of AI. I was a huge advocate of AI, as a matter of fact. I was confident that the gradual progression and improvement of technology would be the catalyst that could free us from the shackles of the 9-to-5 career. When I was a teenager, there was this little program you could run on your computer called Folding At Home. It was basically a number-crunching engine that used your GPU to fold proteins, and the data was sent to researchers studying various diseases. It was a way for my online friends and me to flex how good our PC specs were with the number of folds we could complete in a given time frame, and we got to contribute to a good cause at the same time. These days, they use AI for that sort of thing, and that’s fucking awesome. That’s what I hope to see AI do more of - take the rote, laborious, time-consuming tasks that would take one or more human beings a lifetime to accomplish using conventional tools, and have the machine assist in compiling and sifting through the data to find the most important aspects. I want to see more of that.

    I think there’s a meme floating around that really sums it up for me. Paraphrasing, but it goes: “I thought that AI would do the dishes and fold my laundry so I could have more time for art and writing, but instead AI is doing all my art and writing so I have time to fold clothes and wash dishes.”

    I think generative AI is both flawed and damaging, and it gives AI as a whole a bad reputation because generative AI is what the consumer gets to see, and not the AI that is being used as a tool to help people make their lives easier.

    Speaking of that, I also take issue with the fact that we are more productive than ever before, and AI will only continue to improve that productivity margin, but workers and laborers across the country will never see a dime of compensation for it. People might be able to do the work of two or even three people with the help of AI assistants, but they certainly will never get the salary of three people, and it means that two out of those three people probably don’t have a job anymore if demand doesn’t increase proportionally.

    I want to see regulations on AI. Will this slow down the development and advancement of AI? Almost certainly, but we’ve already seen the chaos that unfettered AI can cause to entire industries. It’s a small price to pay to ask that AI companies prove that they are being ethical, that their work will not damage the livelihood of other people, and that their success will not be built on the backs of other people’s creative endeavors.

    • 𝕱𝖎𝖗𝖊𝖜𝖎𝖙𝖈𝖍@lemmy.world · 2 months ago

      Fwiw, I’ve been getting called an alarmist for talking about Trump’s and the Republicans’ fascist tendencies since at least 2016, if not earlier. I’m now comfortably living in another country.

      My point being that people will call you an alarmist for suggesting anything that requires them to go out of their comfort zone. It doesn’t necessarily mean you’re wrong; it just shows how stupid people are.

        • 𝕱𝖎𝖗𝖊𝖜𝖎𝖙𝖈𝖍@lemmy.world · 2 months ago

          It wasn’t overseas but moving my stuff was expensive, yes. Even with my company paying a portion of it. It’s just me and my partner in a 2br apartment so it’s honestly not a ton of stuff either.

  • BertramDitore@lemm.ee · 2 months ago

    I want real, legally-binding regulation, that’s completely agnostic about the size of the company. OpenAI, for example, needs to be regulated with the same intensity as a much smaller company. And OpenAI should have no say in how they are regulated.

    I want transparent and regular reporting on energy consumption by any AI company, including where they get their energy and how much they pay for it.

    Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.

    Every step of any deductive process needs to be citable and traceable.
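
    To make that last requirement concrete, here is a minimal sketch of what a citable, traceable answer could look like as data. Every name below is invented for illustration; it isn’t taken from any real system:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Step:
        claim: str   # one step of the reasoning, stated plainly
        source: str  # citation for the step (URL, DOI, page number)
        quote: str   # the exact supporting passage, so anyone can audit it

    @dataclass
    class TracedAnswer:
        question: str
        steps: list[Step] = field(default_factory=list)  # no steps = "I don't know"
        answer: str = "I don't know."

    def audit(a: TracedAnswer) -> bool:
        # a regulator (or a user) can verify every step independently
        return all(s.source and s.quote for s in a.steps)
    ```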

    • DomeGuy@lemmy.world · 2 months ago

      Clear reporting should include not just the incremental environmental cost of each query, but also a statement of the invested cost in the underlying training.

    • davidgro@lemmy.world · 2 months ago

      … I want clear evidence that the LLM … will never hallucinate or make something up.

      Nothing else you listed matters: That one reduces to “Ban all Generative AI”. Actually worse than that, it’s “Ban all machine learning models”.

      • mosiacmango@lemm.ee · 2 months ago

        If “they have to use good data and actually fact check what they say to people” kills “all machine learning models,” then it’s a death they deserve.

        The fact is that you can do the above; it’s just much, much harder (you have to work with data from trusted sources), much slower (you have to actually validate that data), and way less profitable (your AI will be able to answer far fewer questions) than pretending to be the “answer to everything machine.”

        • Redex@lemmy.world · 2 months ago

          The way generative AI works means that no matter how good the data is, it’s still gonna bullshit and lie; it won’t “know” if it knows something or not. It’s a chaotic process, and no ML algorithm has ever produced 100% correct results.

          • mosiacmango@lemm.ee · 2 months ago

            That’s how they work now, trained with bad data and designed to always answer with some kind of positive response.

            They absolutely can be trained on actual data, trained to give less confident answers, and have an error checking process run on their output after they formulate an answer.
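
            As a rough illustration of that kind of post-generation error check, here’s a toy sketch. `generate` and `retrieve_trusted` are stand-ins I made up for a model call and a search over vetted sources; neither is a real API:

            ```python
            # Toy vetted corpus; a real system would query curated expert sources.
            TRUSTED = {
                "boiling point of water": "Water boils at 100 °C at sea level.",
            }

            def generate(question: str) -> str:
                # stand-in for the model's first-pass answer
                return "Water boils at 100 °C at sea level."

            def retrieve_trusted(question: str) -> list[str]:
                # stand-in for retrieval from fact-checked sources
                return [v for k, v in TRUSTED.items() if k in question.lower()]

            def answer_with_check(question: str) -> str:
                draft = generate(question)
                evidence = retrieve_trusted(question)
                if not evidence:
                    return "I don't know."  # abstain instead of answering confidently
                return f"{draft} (source: {evidence[0]})"

            print(answer_with_check("What is the boiling point of water?"))
            ```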

            • davidgro@lemmy.world · 2 months ago

              There’s no such thing as perfect data. Especially if there’s even the slightest bit of subjectivity involved.

              Even less existent is complete data.

              • mosiacmango@lemm.ee · 2 months ago

                Perfect? Who said anything about perfect data? I said actually fact-checked data. You keep moving the bar on what’s possible as an excuse to not even try.

                They could indeed build models that worked on actual data from expert sources, and then have their agents check those sources for more correct info when they create an answer. They don’t want to, for all the same reasons I’ve already stated.

                It’s possible, and it doesn’t “doom” LLMs; it just massively increases their accuracy and actual utility at the cost of money, effort, and killing the VC hype cycle.

                • davidgro@lemmy.world · 2 months ago

                  The original thread poster (OTP?) implied perfection when they emphasized the “will never” part, and I was responding to that. For that matter it also excludes actual brains.

      • BertramDitore@lemm.ee · 2 months ago

        Let’s say I open a medical textbook a few different times to find the answer to something concrete, and each time the same reference material leads me to a different answer, every one of them wrong but confidently passed off as right. Then yes, that medical textbook should be banned.

        Quality control is incredibly important, especially when people will use these systems to make potentially life-changing decisions for them.

        • davidgro@lemmy.world · 2 months ago

          especially when people will use these systems to make potentially life-changing decisions for them.

          That specifically is the problem. I don’t have a solution, but treating and advertising these things like they think and know stuff is a mistake that of course the companies behind them are encouraging.

    • venusaur@lemmy.world (OP) · 2 months ago

      This is awesome! The citing and tracing is already improving. I feel like zero hallucinations is gonna take a while, though.

      How does it all get enforced? FTC? How does this become reality?

    • untakenusername@sh.itjust.works · 2 months ago

      OpenAI, for example, needs to be regulated with the same intensity as a much smaller company

      Not too long ago they went to Congress to get them to regulate the AI industry a lot more and wanted the govt to require licences to train large models. Large companies can benefit from regulations when they aren’t easy for smaller competitors to follow.

      And OpenAI should have no say in how they are regulated.

      For sure, otherwise regulation could be made too restrictive, lowering competition.

      Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.

      I think that’s technically really difficult, but maybe if the output of the model was checked against preexisting sources, that could happen - like what Google uses for Gemini.

      Every step of any deductive process needs to be citable and traceable.

      I’m pretty sure this is completely impossible

    • minoscopede@lemmy.world · 2 months ago

      I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.

      Every step of any deductive process needs to be citable and traceable.

      I mostly agree, but “never” is too high a bar IMO. It’s way, way higher than the bar even for humans. Maybe like 0.1% or something would be reasonable?

      Even Einstein misremembered things sometimes.
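
      To show what enforcing a bar like 0.1% could even look like, here’s a toy release gate over a labeled evaluation set. The exact-match comparison is a deliberate simplification, and every name here is made up:

      ```python
      def hallucination_rate(answers: list[str], truths: list[str]) -> float:
          # fraction of answers that contradict the labeled ground truth
          wrong = sum(a != t for a, t in zip(answers, truths))
          return wrong / len(answers)

      THRESHOLD = 0.001  # the 0.1% bar suggested above

      answers = ["Paris", "Berlin", "Madrid"]  # model outputs
      truths  = ["Paris", "Berlin", "Lisbon"]  # ground truth
      rate = hallucination_rate(answers, truths)
      print(f"rate={rate:.1%}, release {'allowed' if rate <= THRESHOLD else 'blocked'}")
      ```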

  • barryamelton@lemmy.ml · 2 months ago

    That stealing copyrighted works would be as illegal for these companies as it is for normal people. Sick and tired of seeing them get away with it.

  • Treczoks@lemmy.world · 2 months ago

    Serious investigation into the copyright breaches committed by AI creators. They ripped off images and texts, even whole books, without the copyright owners’ permission.

    If any normal person broke the laws like this, they would hand out prison sentences till kingdom come and fines the size of the US debt.

    I just ask for the law to be applied to all equally. What a surprising concept…

  • boaratio@lemmy.world · 2 months ago

    For it to go away just like Web 3.0 and NFTs did. Stop cramming it up our asses in every website and application. Make it opt-in instead of, maybe if you’re lucky, opt-out. And also, stop burning down the planet with data center power and water usage. That’s all.

    Edit: Oh yeah, and get sued into oblivion for stealing every copyrighted work known to man. That too.

    Edit 2: And the tech press should be ashamed of how much they’ve been fawning over these slop generators. They gladly parrot press releases, claim it’s the next big thing, and generally just suckle at the teat of AI companies.

  • psion1369@lemmy.world · 2 months ago

    I want disclosure. I want a tag or watermark to let people know that AI was used. I want to see these companies pay dues for the content they used, in the same vein that we have to pay for higher learning. And we need to stop calling it AI as well.
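
    The disclosure tag could be as simple as machine-readable metadata attached to every generated artifact. A sketch with invented field names, not any existing standard:

    ```python
    import json

    disclosure = {
        "ai_generated": True,
        "model": "example-model-v1",  # hypothetical model identifier
        "generated_on": "2025-01-01",
        "content_dues_paid": True,    # the "pay dues for content used" part
    }
    print(json.dumps(disclosure, indent=2))
    ```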

  • AsyncTheYeen@lemmy.world · 2 months ago

    People have negative sentiments towards AI under a capitalist system, where the most successful is equal to the most profitable, and that does not translate into the most useful for humanity.

    We have the technology to feed everyone, and yet we don’t. We have the technology to house everyone, and yet we don’t. We have the technology to teach everyone, and yet we don’t.

    Capitalist democracy is not real democracy.

    • Randomgal@lemmy.ca · 2 months ago

      This is it. People don’t have feelings for a machine. People have feelings for the system and the oligarchs running things, but said oligarchs keep telling you to hate the inanimate machine.

  • Bwaz@lemmy.world · 2 months ago

    I’d like there to be a web-wide expectation by everyone that any AI-generated text, comment, story, or image be clearly marked as AI, and for people to feel incensed and angry when it isn’t labeled - rather than wondering whether a person with a soul produced the content, or losing faith that real info can be found online.

  • kittenzrulz123@lemmy.blahaj.zone · 2 months ago

    I do not need AI and I do not want AI; I want to see it regulated to the point that it becomes severely unprofitable. The world is burning and we are heading face-first towards a climate catastrophe (if we’re not already there). We DON’T need machines to mass-produce slop.

  • Bytemeister@lemmy.world · 2 months ago

    I’d like to have laws that require AI companies to publicly list their sources/training materials.

    I’d like to see laws defining what counts as AI, and then banning advertising non-compliant software and hardware as “AI”.

    I’d like to see laws banning the use of generative AI for creating misleading political, social, or legal materials.

    My big problems with AI right now are that we don’t know what info has been scooped up by these models, and that companies are pushing misleading products as AI while constantly overstating the capabilities and under-delivering, which will damage the AI industry as a whole. I’d also want to see protections to keep stupid and vulnerable people from believing AI-generated content is real. Remember, a few years ago we had to convince people not to eat Tide Pods. AI can be a very powerful tool for manipulating the ranks of stupid people.

  • Dr. Moose@lemmy.world · 2 months ago

    I’m generally pro-AI, but I agree with the argument that having big tech hoard this technology is the real problem.

    The solution is easy and right there in front of everyone’s eyes: force open source on everything. All datasets, models, model weights, and so on have to be fully transparent. Maybe even hardware firmware should be open source.

    This will literally solve every single problem people have other than energy use which is a fake problem to begin with.

  • HeartyOfGlass@lemm.ee · 2 months ago

    My fantasy is for “everyone” to realize there’s absolutely nothing “intelligent” about current AI. There is no rationalization. It is incapable of understanding & learning.

    ChatGPT et al are search engines. That’s it. It’s just a better Google. Useful in certain situations, but pretending it’s “intelligent” is outright harmful. It’s harmful to people who don’t understand that & take its answers at face value. It’s harmful to business owners who buy into the smoke & mirrors. It’s harmful to the future of real AI.

    It’s a fad. Like NFTs and Bitcoin. It’ll have its die-hard fans, but we’re already seeing the cracks - it’s absorbed everything humanity’s published online & it still can’t write a list of real book recommendations. Kids using it to “vibe code” are learning how useless it is for real projects.

  • Sunsofold@lemmings.world · 2 months ago

    Magic wish granted? Everyone gains enough patience to leave it to research until it can be used safely and sensibly. It was fine when it was an abstract concept being researched by CS academics. It only became a problem when it all went public and got tangled in VC money.

    • venusaur@lemmy.world (OP) · 2 months ago

      Unfortunately, right now the world at large is the research lab for AI.

      I feel like the only thing that the world universally bans is nuclear weapons. AI would have to become so dangerous that the world decides to leave it in the lab, but you can easily make an LLM at home. You can’t just make nuclear power in your room.

      How do you get your wish?

      • Sunsofold@lemmings.world · 2 months ago

        If I knew how to grant my wish, it’d be less of a wish and more of a quest. Sadly, I don’t think there’s a way to give patience to the world.

        • venusaur@lemmy.world (OP) · 2 months ago

          Yeah I don’t think our society is in a position mentally to have patience. We’ve trained our brains to demand a fast-paced variety of gratification at all costs.

          • Sunsofold@lemmings.world · 2 months ago

            We were already wired for it, but we didn’t have access to the things we have now. It takes a lot of wealth to ride the hedonic treadmill, but our societies have reached a baseline wealth where it has become much more achievable to ride it almost all the time.

    • calcopiritus@lemmy.world · 2 months ago

    Energy consumption limit. Every AI product has a consumption limit of X GJ. After that, the server just shuts off.

    The limit should be high enough not to discourage research that would make generative AI more energy-efficient, but low enough that commercial users pay a heavy price for wasteful energy usage.
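
    A toy sketch of how such a hard cutoff could be enforced at the service level. The per-query cost is a made-up number; real metering would have to come from the datacenter:

    ```python
    class EnergyBudget:
        def __init__(self, limit_joules: float):
            self.limit = limit_joules
            self.used = 0.0

        def charge(self, joules: float) -> None:
            self.used += joules
            if self.used > self.limit:
                raise RuntimeError("energy budget exhausted: shutting the server off")

    budget = EnergyBudget(limit_joules=1e9)  # a 1 GJ budget
    for _ in range(5):
        budget.charge(10_000)  # hypothetical 10 kJ metered per query
    print(f"used {budget.used:,.0f} J of {budget.limit:,.0f} J")
    ```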

    Additionally, data usage consent for generative AI should be opt-in. Not opt-out.