• Darkard@lemmy.world · 26 days ago

    All these chat bots are a massive amalgamation of the internet, which as we all know is full of absolute dog shit information given as fact as well as humorously incorrect information given in jest.

    To use one to give advice on something as important as drug abuse recovery is simply insanity.

  • skisnow@lemmy.ca · 25 days ago

    One of the top AI apps in the local language where I live has ‘Doctor’ and ‘Therapist’ as some of its main “features” and gets gushing coverage in the press. It infuriates me every time I see mention of it anywhere.

    Incidentally, telling someone to have a little meth is the least of it. There’s a much bigger issue that’s been documented where ChatGPT’s tendency to “Yes, and…” the user leads people with paranoid delusions and similar issues down some very dark paths.

    • T156@lemmy.world · 25 days ago

      Especially since it doesn’t push back the way a reasonable person would. There are articles about how it sends people into a conspiratorial spiral.

  • Emerald@lemmy.world · 25 days ago (edited)

    Why does it say “OpenAI’s large language model GPT-4o told a user who identified themself to it as a former addict named Pedro to indulge in a little meth.” when the article says it’s Meta’s Llama 3 model?

      • Forbo@lemmy.ml · 25 days ago

        The summary on here says that, but the actual article says it was Meta’s.

        In one eyebrow-raising example, Meta’s large language model Llama 3 told a user who identified themself to it as a former addict named Pedro to indulge in a little methamphetamine — an incredibly dangerous and addictive drug — to get through a grueling workweek.

        Might have been different in a previous version of the article, then updated, but the summary here doesn’t reflect the change? I dunno.

  • HugeNerd@lemmy.ca · 25 days ago (edited)

    oh, do a little meth ♫

    vape a little dab ♫

    get high tonight, get high tonight ♫

    -AI and the Sunshine Band

    • Fizz@lemmy.nz · 25 days ago

      You’re done for. The next headline will be: “Lemmy user tells recovering chonk that he can have a lil salami as a treat”

    • Lord Wiggle@lemmy.world (OP) · 25 days ago

      If Luigi can do it, so can you! Lead by example, don’t let others do the dirty work.

  • Gorilladrums@lemmy.world · 26 days ago

    LLM AI chatbots were never designed to give life advice. People have this false perception that these tools are some kind of magical crystal ball that has all the right answers to everything, and they simply aren’t.

    These models cannot think, they cannot reason. The best they could do is give you their best prediction as to what you want based on the data they’ve been trained on and the parameters they’ve been given. You can think of their results as “targeted randomness” which is why their results are close or sound convincing but are never quite right.
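    That “targeted randomness” is literally how these models pick each next word: they produce a probability distribution over possible tokens and sample from it, with a temperature knob controlling how random the pick is. A minimal sketch below, with made-up tokens and numbers purely for illustration (not from any real model):

```python
import math
import random

# Hypothetical logits a model might assign to candidate next tokens.
# The tokens and values here are invented for illustration only.
logits = {"a-break": 2.0, "advice": 1.5, "medicine": 1.0, "meth": 0.5}

def softmax(logits, temperature=1.0):
    """Turn raw logits into probabilities; higher temperature flattens them."""
    scaled = [v / temperature for v in logits.values()]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(v - m) for v in scaled]
    z = sum(exps)
    return {tok: e / z for tok, e in zip(logits, exps)}

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample one token according to its softmax probability."""
    probs = softmax(logits, temperature)
    return rng.choices(list(probs), weights=list(probs.values()), k=1)[0]
```

    Raising the temperature flattens the distribution, so a low-probability token (here, “meth”) becomes more likely to be sampled; that is one concrete sense in which the output is “close or convincing but never guaranteed right.”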

    That’s because these models were never designed to be used like this. They were meant to be used as a tool to aid creativity. They can help someone brainstorm ideas for projects or waste time as entertainment or explain simple concepts or analyze basic data, but that’s about it. They should never be used for anything serious like medical, legal, or life advice.

      • Gorilladrums@lemmy.world · 26 days ago

        That’s because we have no sensible regulation in place. These tools are supposed to be regulated the same way we regulate other tools like the internet, but we just don’t see any serious pushes for that in government.

    • Dharma Curious@startrek.website · 26 days ago

      This is what I keep trying to tell my brother. He’s anti-AI, but to the point where he sees absolutely no value in it at all. Can’t really blame him considering stories like this. But they are incredibly useful for brainstorming, and recently I’ve found ChatGPT to be really good at helping me learn Spanish, because it’s conversational. I can have conversations with it in Spanish where I don’t feel embarrassed or weird about making mistakes, and it corrects me when I’m wrong. They have uses. Just not the uses people seem to think they have.

      • Gorilladrums@lemmy.world · 26 days ago

        AI is the opposite of cryptocurrency. Crypto is a solution looking for a problem, but AI is a solution for a lot of problems. It has relevance because people find it useful; there’s demand for it. There’s clearly value in these tools when they’re used the way they’re meant to be used, and they can be quite powerful. It’s unfortunate how many people are misinformed about how these LLMs work.

        • petrol_sniff_king@lemmy.blahaj.zone · 25 days ago

          I will admit that, unlike crypto, AI is technically capable of being useful, but its uses are for problems we have created for ourselves.

          – “It can summarize large bodies of text.”
          What are you reading these large bodies of text for? We can encourage people to just… write less, you know.

          – “It’s a brainstorming tool.”
          There are other brainstorming tools. Creatives have been doing this for decades.

          – “It’s good for searching.”
          Google was good for searching until they sabotaged their own service. In fact, Google was even better for searching before SEO began rotting it from within.

          – “It’s a good conversationalist.”
          It is… not a real person. I unironically cannot think of anything sadder than this sentiment. What happened to our town squares? Why is there nowhere for you to go and hang out with real, flesh and blood people anymore?

          – “Well, it’s good for learning languages.”
          Other people are good for learning languages. And, I’m not gonna lie, if you’re too socially anxious to make mistakes in front of your language coach, I… kinda think that’s some shit you gotta work out for yourself.

          – “It can do the work of 10 or 20 people, empowering the people who use it.”
          Well, the solution is in the text. Just have the 10 or 20 people do that work. They would, for now, do a better job anyway.

          And, it’s not actually true that we will always and forever have meaningful things for our population of 8 billion people to work on. If those 10 or 20 people displaced have nowhere to go, what is the point of displacing them? Is Google displacing people so they can live work-free lives, subsisting on their monthly UBI payments? No. Of course they’re not.


          I’m not arguing that people can’t find a use for it; all of the above points are uses for it.

          I am arguing that 1) it’s kind of redundant, and 2) it isn’t worth its shortcomings.

          AI is enabling tech companies to build a centralized—I know lemmy loves that word—monopoly on where people get their information from (“speaking of white genocide, did you know that Africa is trying to suppress…”).

          AI will enable Palantir to combine your government and social media data to measure how likely you are to, say, join a union, and then put that into an employee risk assessment profile that will prevent you from ever getting a job again. Good luck organizing a resistance when the AI agent on your phone is monitoring every word you say, whether your screen is locked or not.

          In the same way that fossil fuels have allowed us to build cars and planes and boats that let us travel much farther and faster than we ever could before, but which will also bury an unimaginable number of dead in salt and silt as global temperatures rise: there are costs to this technology.

  • TimewornTraveler@lemm.ee · 24 days ago

    So this is the fucker who is trying to take my job? I need to believe this post is true. It sucks that I can’t really verify it. Gotta stay skeptical and all that.

    • Joeffect@lemmy.world · 24 days ago

      It’s not AI… It’s your predictive text on steroids… So yeah… Believe it… If you understand it’s not doing anything more than that, you can understand why and how it makes stuff up…

  • ivanafterall ☑️@lemmy.world · 25 days ago

    The article doesn’t seem to specify whether Pedro had earned the treat for himself? I don’t see the harm in a little self-care/occasional treat?