I know the reputation AI has on Lemmy, but some users (like myself) have found that LLMs can be useful tools.

What are fellow AI users using these tools for? And which models do you find the most useful?

  • Balerion@piefed.blahaj.zone · 14 days ago

    AI is great at helping me multitask. For example, with AI, I can generate misinformation and destroy the environment at the same time!

    • Rhynoplaz@lemmy.world · 13 days ago

      Shit. Where I come from, people don’t need AI for that. They just hang a Trump flag in the back of their truck and roll coal through town.

    • Electric@lemmy.world · 13 days ago

      It sucks so much that if the US had kept up with green energy infrastructure (or nuclear power), all these datacenters (not just AI) could be running on abundant, cheap power without killing our environment.

      xAI running off of fucking diesel generators should be a crime, but environmental and human health issues get less attention than “look everyone it called itself Hitler, so crazy!!!”

    • tpihkal@lemmy.world (OP) · 13 days ago

      Can you provide some specific examples? I can think of a few ways to implement some of that for my own use case.

    • occultist8128@infosec.pub · 13 days ago

      Not all AI is bad; some types of AI are genuinely useful to people. Don’t equate AI with LLMs — LLMs are just one branch of AI. SMH, people…

  • TootSweet@lemmy.world · 13 days ago

    A lot of what we take for granted in software nowadays was once considered “AI”. Every NPC that follows your character in a video game while dynamically accounting for obstacles and terrain features uses the “A* algorithm”, which is commonly taught in college courses on “AI”. Gmail sorting spam from non-spam (and not really all that well, honestly)? That’s “AI”. The first version of Google’s search algorithm was also “AI”.

    If you’re asking about LLMs, none. Zero. Zip. Nada. Not a goddamned one. LLMs are a scam that need to die in a fire.

  • ShittyBeatlesFCPres@lemmy.world · 13 days ago

    I don’t think there are many consumer use cases for things like LLMs, but highly focused, specialized models seem useful. Like protein folding, identifying promising medications, or finding patterns in giant scientific datasets.

    • garbagebagel@lemmy.world · 13 days ago

      I use it to help me write emails at work pretty regularly. I have pretty awful anxiety, and it can take me a while to make sure my wording is right. I don’t like using it, not really, but would I rather spend 4 hours typing up an email to all the bosses that doesn’t sound stupid AF, or ask for help and edit what it gives me instead?

      I know people use it to summarize policy or to brainstorm or to come up with very rough drafts.

      I understand the connotations of using it, but I would definitely not say there’s zero consumer use case for it at all.

  • ace_garp@lemmy.world · 12 days ago

    I tried Whisper+ voice-to-text this week.

    It uses a 250MB model downloaded from Hugging Face and processes voice completely offline.

    The accuracy has been 100% for known words, so far.

    I use it for transcribing texts, messages, and diary entries.

    * I’d be interested to know if it has a large power drain per use.

  • Perspectivist@feddit.uk · 13 days ago

    I use ChatGPT every single day, and I find it both extremely useful and entertaining.

    I mainly use it to help edit longer messages, bounce ideas around, and share random thoughts I know my friends wouldn’t be interested in. Honestly, it also has pretty much replaced Google for me.

    I basically think of it as a friend who’s really knowledgeable across a wide range of topics, excellent at writing, and far more civil than most people I run into online - but who’s also a bit delusional at times and occasionally talks out of their ass, which is why I can’t ever fully trust it. That said, it’s still a great first stop when I’m trying to solve a problem.

    • tpihkal@lemmy.world (OP) · 13 days ago

      I think that pretty much describes my own perceived relationship. I know it’s a tool, but it’s a conversational tool that can produce results faster than I can, even though I need to proofread its work before I accept it.

  • MIDItheKID@lemmy.world · 13 days ago

    I used GPT to help me plan a 2 week long road trip with my family. It was pretty fucking awesome at finding cool places to stop and activities for my kids to do.

    It definitely made some stupid ass suggestions that would have routed us far off our course, or suggested stopping at places 15 minutes into our trip, but sifting through the slop was still a lot quicker than doing all of the research myself.

    I also use GPT to make birthday cards. Have it generate an image of some kind of inside joke etc. I used to do these by hand, and this makes it way quicker.

    I also use it at work for sending out communications and stuff. It can take the information I have and format it and professionalize it really quick.

    I also use it for PowerShell scripting here and there, but it does some really wacky stuff sometimes that I have to go in and fix. Or it hallucinates entire modules that don’t exist, and when I point it out it’s like “good catch! That doesn’t exist!”, which always gives me a little chuckle. My rule with AI and PowerShell is that I don’t ask it to do things I don’t already know how to do. I like to learn things and be good at my job, but I don’t mind using GPT to help with some of the busy work.

    • sem@lemmy.blahaj.zone · 12 days ago

      I once got an email from HR saying I had a bike commuter benefit I didn’t know about. I couldn’t find any more information in the attachment, so I emailed HR, and it turns out they had used AI to write the email and wouldn’t be giving out any corrections or bike commuter benefits. Bullshit.

  • brucethemoose@lemmy.world · 13 days ago

    I run LLMs locally for scripting, ADD brainstorming/organization, automation, pseudo editors and all sorts of stuff, as they’re crazy good for the size now.

    I think my favorites are Nemotron 49B (for STEM), Qwen3 finetunes (for code), some esoteric 2.5 finetunes (for writing), and Jamba 52B (for analysis, RAG, chat, and long context; this one is very underrated). They all fit in 24GB. And before anyone asks: yes, I know they’re unreliable, but they’re self-hosted tools that work for me.

    I could run GLM 4.5 offloaded with a bit more RAM…

  • Electric@lemmy.world · 13 days ago

    Copilot in VS Code is something you’d have to tear out of my cold, dead hands. Pressing Tab to autocomplete is so useful. I use the GPT 4.1 model or whatever it’s called. I tried Gemini, but for some reason it’s complete ass at code. Android Studio’s Gemini is worse than the free tier on the website.

    However, I’ve found the Gemini Pro model on the website is incredibly good for information assistance. To give an idea of my current uses, I have two chats pinned: fact checking and programming advice. I use the former for general research that would take more than a few minutes of Googling when I need an answer now, and the latter for brainstorming code design or technical tutorials (recently it helped me set up a VM in WSL).

    One tool I wish I could use is ElevenLabs. Had a friend on the free tier of it make some really cool and convincing voice lines (I forgot what character it was) a long time ago. Looks easy to use too. I can’t justify spending money just to play with it but if I had a purpose for it, I would.

    • MisterCurtis@lemmy.world · 13 days ago

      Just today I was tinkering with the Continue.dev extension for VS Code. Running the models locally, without sending sensitive proprietary source code over the wire to a third-party service, was a hard requirement for me to even consider bringing AI into my IDE.

  • PeriodicallyPedantic@lemmy.ca · 13 days ago

    Honestly I’m part of the problem a little bit.

    In my hobby project I used GitHub copilot, to help me ramp up on unfamiliar tech. I was integrating three unfamiliar platforms in an unfamiliar program language, and it helped expose the APIs and language features I didn’t know about. It was almost like a tutorial; it’d make some code that was kinda broken, but fixing it would introduce me to new language features and API resources that would help me. Which was nice because I struggle to just read API specs.

    I’ve also used it in my D&D campaign to create images of new settings. It’s just a 3-player weekly game, so it’s hard to justify paying an artist for a rush job. Not great, I know. I hope the furry community has more backbone than I do, because they’re single-handedly keeping the illustration industry afloat at this point.

  • tatterdemalion@programming.dev · 13 days ago

    LLMs are pretty good for language learning. I often ask ChatGPT to converse with me in Japanese or help me make a sentence sound more natural.

  • Treczoks@lemmy.world · 13 days ago

    The one that the other department tried, which dramatically failed to meet expectations. It gave management a healthy dose of reality about “AI”.

  • burrito@sh.itjust.works · 14 days ago

    I’m running ollama and open-webui and some unsloth modified models for some general purpose stuff.

    The https://huggingface.co/unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF model has been pretty good. Beware: it’s a Chinese model, so you can get some funny results if you ask about Tiananmen Square or whether certain people resemble Winnie the Pooh. For writing Linux configurations it works great.

    Some gemma3 models are okay, but they don’t seem as good. Same for the Phi4 models.
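
    For anyone curious what this kind of setup looks like from a script, here’s a minimal sketch of calling a local Ollama server from Python. The endpoint and port are Ollama’s documented defaults, but the model tag is just an example; adjust it to whatever you’ve pulled.

```python
import json
import urllib.request

def build_generate_payload(prompt, model="qwen3:30b"):
    # Ollama's /api/generate endpoint takes a JSON body like this;
    # stream=False asks for the whole completion in one response.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(prompt, host="http://localhost:11434"):
    # Requires an Ollama server already running on the default port.
    data = json.dumps(build_generate_payload(prompt)).encode()
    req = urllib.request.Request(
        host + "/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

    open-webui talks to the same API under the hood, so anything that works in its chat window should work through this endpoint too.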

      • burrito@sh.itjust.works · 13 days ago

        It can run on a variety of systems. You just need enough VRAM on your video card to fit the model, and then it can run pretty fast. There are models down to a couple hundred MB in size, but they’re quite limited. Others are 245GB; the bigger ones often use a “mixture of experts” design, where only portions of the model are loaded as needed and the rest stays unused for the particular task at hand.

        If you don’t have enough VRAM to fit the model, it falls back to running on the CPU and using system RAM. Most of the operations are limited by the speed of the memory holding the model, and video card memory is much faster than system memory, which is what makes GPU inference so much quicker. The CPU can still get the job done, but you’ll wait quite a while for the output.

        There are also ways of making models smaller through quantization, which reduces the precision of the model’s parameters by storing them in 8-bit or smaller formats instead of 16- or 32-bit floats. (The number with the “b” next to it, e.g. 4b, 8b, 14b, 30b, is the parameter count in billions, not the quantization width.) This packs the same model into less space, at a small cost in accuracy.
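
        The quantization idea can be sketched in a few lines of Python. This is a toy illustration of symmetric int8 quantization, not the exact scheme any real runtime uses:

```python
# Toy symmetric int8 quantization: map each float32 weight to a signed
# byte plus one shared scale factor, cutting memory roughly 4x at a
# small cost in accuracy.
def quantize_int8(weights):
    # Largest magnitude maps to +/-127; "or 1.0" guards an all-zero input.
    scale = max(abs(w) for w in weights) / 127 or 1.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    return [q * scale for q in quantized]

weights = [0.8, -1.27, 0.05]
q, scale = quantize_int8(weights)   # every entry fits in -128..127
restored = dequantize(q, scale)     # close to, but not exactly, the originals
```

        Real quantization formats (e.g. the 4-bit GGUF variants) work block-by-block with fancier rounding, but the memory/accuracy trade-off is the same idea.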

  • Aurenkin@sh.itjust.works · 13 days ago

    I’ve found LLMs in general helpful for coding, specifically when I have to use tools or languages that I only have a passing familiarity with.

    In my personal life, I’ve used Gemini for fitness coaching alongside other sources of information, and it has been quite helpful and motivating.

    • tpihkal@lemmy.world (OP) · 13 days ago

      What type of device do you use Gemini on for fitness?

      …and I’m sorry, so sorry, but “can Gemini fitness dick”?

  • herbz@lemmy.ml · 13 days ago

    My CRM system at work has what’s called “Genius AI” integrated into it. When customer service reps receive calls that require site visits, the AI auto-fills the work ticket from the phone conversation: it adds the contact name and numbers and even puts in a brief description of what service they require. The AI also transcribes our calls into text, so we can refer back to them or get caught up on a job when someone is out sick. It wasn’t life-changing, and it wasn’t forced on us, but as a simple aid it makes life a bit easier.