A survey of more than 2,000 smartphone users by second-hand smartphone marketplace SellCell found that 73% of iPhone users and a whopping 87% of Samsung Galaxy users felt that AI adds little to no value to their smartphone experience.

SellCell only surveyed users with an AI-enabled phone – that’s an iPhone 15 Pro or newer, or a Galaxy S22 or newer. The survey doesn’t give an exact sample size, but more than 1,000 users of each platform took part.

Further findings show that most users of either platform would not pay for an AI subscription: 86.5% of iPhone users and 94.5% of Galaxy users would refuse to pay for continued access to AI features.

From the data listed so far, it seems that people just aren’t using AI. On both platforms, only about two-fifths of those surveyed have even tried AI features – 41.6% of iPhone users and 46.9% of Galaxy users.

So, that’s a majority of users not even bothering with AI in the first place, and a general lack of interest in AI features across the user base, despite both Apple and Samsung making such a big deal out of AI.

  • clonedhuman@lemmy.world · 6 hours ago

    The consumer-side AI that a handful of multi-billion-dollar companies keep peddling to us is just a way for them to attempt to justify AI to us. Otherwise, it consumes MASSIVE amounts of our energy capacity and is primarily being used in ways that harm us.

    And, of course, there’s nothing they direct at us that isn’t ultimately (and solely) for their benefit: our every use of their AI helps train their models, and eventually it will simply be groups of billionaires competing against one another to build the most powerful model, one that allows them to dominate us and their competitors.

    As long as this technology remains controlled by those whose entire existence is organized around domination, it will be a net harm to all of us. We’d have to free it from their grip to make it meaningful in our daily lives.

  • lack@lemmy.world · 15 hours ago

    Apple Intelligence is trash and only lasted 2 days on my 16 pro. Not turning it back on either.

    • Daelsky@lemmy.ca · 14 hours ago

      I’ve been on my iPhone 12 since it came out in September 2020 (I bought it on Halloween 2020 lol), and apart from battery health being at 77%, I have NO reason to upgrade. Even then, I’ll just change the battery when it gets to 70% and… that’s it.

      Phones just aren’t exciting anymore. I used to watch so many phone reviews on YouTube, and now they are all just… the same. Folding phones aren’t that interesting to me. I saw that there is a new battery technology, but that’s like the only new fun feature I’m interested in.

      Most performance upgrades aren’t used in the real world and AI suuuuucks

  • NightCrawlerProMax@lemmy.world · 14 hours ago

    I’m a software engineer and GitHub Copilot as an AI pair programmer has vastly improved my productivity. Also, I use ChatGPT extensively to help with miscellaneous stuff. Apart from these two, I don’t really find other AI implementations useful.

  • 9488fcea02a9@sh.itjust.works · 1 day ago

    I hate that i can no longer trust what comes out of my phone camera to be an accurate representation of reality. I turn off all the AI enhancement stuff but who knows what kind of fuckery is baked into the firmware.

    NO, i dont want fake AI depth of field. NO, i do not want fake AI “makeup” fixing my ugly face. NO, i do not want AI deleting tourists in the background of my picture of the eiffel tower.

    NO, i do not want AI curating my memories and reality. Sure, my vacation photos have shitty lighting and bad composition. But they are MY photos and MY memories of something i experienced personally. AI should not be “fixing” that for me

  • Pyr_Pressure@lemmy.ca · 1 day ago

    It is absolutely useless for everyday simple tasks, I find.

    Who the fuck needs AI to SUMMARIZE an EMAIL, GOOGLE?

    IT’S FIVE LINES

    Get out of my face Gemini!

    • Daelsky@lemmy.ca · 14 hours ago

      Or the shitty notification summary. If someone wrote something to me, then it’s important enough for me to read it. I don’t need 3 bullet points with distorted info from AI.

    • lohky@lemmy.world · 23 hours ago

      Yahoo was using their shitty AI tool to summarize emails THEN REPLACE THE FUCKING SUBJECT LINES WITH THE SUMMARY!

      It immediately hallucinated raffle winners for a sneaker company and iirc they started getting death threats.

    • Mniot@programming.dev · 1 day ago

      It’d be way less offensive if it were just presented as an option, instead of dancing around flashing at me.

  • nuko147@lemm.ee · 1 day ago

    This is what happens when companies prioritize hype over privacy and try to monetize every innovation. Why pay €1,500 for a phone only to have basic AI features? AI should solve real problems, not be a cash grab.

    Imagine if AI actually worked for users:

    • Show me all settings to block data sharing and maximize privacy.
    • Explain how you optimized my battery last week and how much time it saved.
    • Automatically silence spam calls without selling my data to third parties.
    • Detect and block apps that secretly drain data or access my microphone.
    • Automatically organize my photos by topic without uploading them to the cloud.
    • Do everything I could do with Tasker, just by saying it in plain words.

    • Hominine@lemmy.world · 1 day ago

      Do everything I could do with Tasker, just by saying it in plain words.

      Stop, I can only get so hard.

  • Katana314@lemmy.world · 1 day ago

    Much like certain other trends like 3D TVs, this helps us see how often “visionaries” at the top of a company are charmed by ideas that no one on the ground is interested in. Same with blockchain, cryptocurrency, and so many other buzzwords.

    So maybe I’ll mention it again: the Accountable Capitalism Act would require 40% of a company’s board to be elected by its employees, who could provide more practical input about how top-level decisions affect the people working there.

    • Captain Aggravated@sh.itjust.works · 14 hours ago

      I could actually see 3D TVs taking off, even with the requirement for glasses. At the time, there was a fad for 3D movies in theaters. But they needed to get content creators on board so that there was a reason to own one. There was no content, so no one invested, and in a year or two there will probably be YouTubers making videos of “I finally found Sony’s forgotten 3D TV.”

    • Ilovethebomb@lemm.ee · 14 hours ago

      I can see why people thought 3D TVs were a great idea, until they actually experienced one for themselves. It also didn’t help that so much content wasn’t genuinely shot in 3D, but altered in post.

  • Killer57@lemmy.ca · 1 day ago

    I have Google Gemini turned off on my Pixel, because I find that it makes my experience genuinely worse.

  • diffusive@lemmy.world · 1 day ago

    I hate that nowadays AI == LLM/chatbot.

    I love the AI classifiers that keep me safe from spam or that help me categorise pictures. I love the AI based translators that allow me to write in virtually any language almost like a real speaker.

    What I hate is these super-advanced stochastic parrots that manage to pass the Turing test, and so people assume they think.

    I am pretty sure that if they had asked specifically about LLMs/chatbots, the percentage of people not caring would be even higher.

    • BoJackHorseman@lemm.ee · 1 day ago

      The AI features present on Apple and Samsung phones are indeed useless.

      They have small language models that summarise notifications and rewrite your messages and emails. Those are pretty useless.

      Image-editing AI that removes unwanted people from your photos has some use.

      However, top AI tools like deep research and Cursor, which millions of developers use to assist with coding, are objectively very useful.

  • Stormy1701@lemmy.zip · 1 day ago

    AI is a completely bad idea, and if people cared at all about their privacy they would disable it.

    It’s all well and good to say that AI categorises pictures and allows you to search for people and places in them, but how do you think that is accomplished? The AI scans and remembers every picture on your phone. Same with email summaries. It’s reading your emails too.

    The ONLY assurance that this data isn’t being sent back to HQ is the company’s word that it isn’t. And since it’s closed source, we have no way of auditing their use of data to see if that’s true.

    Do you trust Apple and/or Google? Because you shouldn’t.

    Especially now that Apple Intelligence is enabled by DEFAULT when setting up a new AI-capable iPhone or iPad.

    It should be OPT-IN, not opt-out.

    All AI can ever really do is invade your privacy and make humans even more stupid than they already are. Now they don’t even have to go to a search engine to look for things. They ask the AI and blindly believe whatever bullshit it regurgitates at them.

    AI is dangerous on many levels because soon it will be deciding who gets hired for a new job, who gets seen first in the ER and who gets that heart transplant, and who dies.

  • melfie@lemmings.world · 1 day ago

    Although I think Steve Jobs was a real piece of shit, his product instincts were often on point, and his message in this video really stuck with me. Companies shoehorning AI into everything would do well to start with something useful they want to enable and work backwards to the technology, as he describes here:

    https://m.youtube.com/watch?v=48j493tfO-o

  • umbrella@lemmy.ml · 1 day ago

    please burst that bubble already so i can get a cheap second hand server grade gpu

  • ZeroGravitas@lemm.ee · 2 days ago

    A 100% accurate AI would be useful. A 99.999% accurate AI is in fact useless, because of the damage that one miss might do.

    It’s like the French say: Add one drop of wine in a barrel of sewage and you get sewage. Add one drop of sewage in a barrel of wine and you get sewage.

    • Dojan@lemmy.world · 2 days ago

      I think it largely depends on what kind of AI we’re talking about. iOS has had models that let you extract subjects from images for a while now, and that’s pretty nifty. Affinity Photo recently got the same feature. Noise cancellation can also be quite useful.

      As for LLMs? Fuck off, honestly. My company apparently pays for MS CoPilot, something I only discovered when the garbage popped up the other day. I wrote a few random sentences for it to fix, and the only thing it managed to consistently do was screw the entire text up. Maybe it doesn’t handle Swedish? I don’t know.

      One of the examples I sent to a friend is as follows, but in Swedish:

      Microsoft CoPilot is an incredibly poor product. It has a tendency to make up entirely new, nonsensical words, as well as completely mangle the grammar. I really don’t understand why we pay for this. It’s very disappointing.

      And CoPilot was like “yeah, let me fix this for you!”

      Microsoft CoPilot is a comedy show without a manuscript. It makes up new nonsense words as though were a word-juggler on circus, and the grammar becomes mang like a bulldzer over a lawn. Why do we pay for this? It is buy a ticket to a show where actosorgets their lines. Entredibly disappointing.

      • Oggyb@lemmy.world · 6 hours ago

        That’s so beautifully illustrative of what the LLM is actually doing behind the curtain! What a mess.

        • Dojan@lemmy.world · 3 hours ago

          Yeah, it wonks the tokens up.

          I actually really like machine learning. It’s been a fun field to follow and play around with for the past decade or so. It’s the corpo-fascist BS that’s completely tainted it.

    • Kaja • she/her@lemmy.blahaj.zone · 2 days ago

      We’re not talking about an AI running a nuclear reactor; this article is about AI assistants on a personal phone. A 0.001% failure rate for apps on your phone isn’t that insane, and generally the only consequence of a failure would be having to try a slightly different query. Tools like Alexa or Siri mishear user commands probably more than 0.001% of the time, and yet those tools have absolutely caught on for a significant number of people.

      The issue is that the failure rate of AI is high enough that you have to vet the outputs, which typically requires about as much work as doing the task yourself. And using AI for creative things like art or videos is a fun novelty, but not something you do regularly, so your phone pushing apps you only want to use once in a blue moon is annoying. If AI were actually so useful that you could query it with anything and get back exactly what you wanted 99.999% of the time, it would absolutely become much more useful.
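
      A back-of-the-envelope sketch of that break-even logic, with every number invented purely for illustration: vetting only pays while misses are common and costly, and at 99.999% accuracy skipping the vetting entirely becomes the cheaper strategy.

      ```python
      # Rough model of the vetting argument; all constants are made up.
      TASK_MIN = 30.0     # doing the job yourself
      VET_MIN = 25.0      # properly checking an AI answer (nearly the full task)
      MISS_COST = 2000.0  # cleaning up an uncaught miss that made it out the door

      def vet_everything(p_miss):
          # Always pay the vetting time; caught misses mean redoing the task.
          return VET_MIN + p_miss * TASK_MIN

      def trust_blindly(p_miss):
          # No vetting, but an uncaught miss costs far more than the task itself.
          return p_miss * MISS_COST

      for p in (0.10, 0.001, 0.00001):  # today-ish, optimistic, the 99.999% case
          print(f"p(miss)={p}: vet={vet_everything(p):.2f} min, "
                f"trust={trust_blindly(p):.4f} min, by hand={TASK_MIN:.0f} min")
      ```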

    • NuXCOM_90Percent@lemmy.zip · 2 days ago

      People love to make these claims.

      Nothing is “100% accurate” to begin with. Humans spew constant FUD and outright malicious misinformation. Just do some googling for anything medical, for example.

      So either we acknowledge that everything is already “sewage” and this changes nothing, or we acknowledge that people can already find value in searching for answers to questions; they just need to apply critical thought toward whether I_Fucked_your_mom_416 on gamefaqs is a valid source or not.

      Which gets to my big issue with most of the “AI Assistant” features. They don’t source their information. I am all for not needing to remember the magic incantations to restrict my searches to a single site or use boolean operators when I can instead “ask jeeves” as it were. But I still want the citation of where information was pulled from so I can at least skim it.

      • AnAmericanPotato@programming.dev · 2 days ago

        99.999% would be fantastic.

        90% is not good enough to be a primary feature that discourages inspection (like a naive chatbot).

        What we have now is like…I dunno, anywhere from <1% to maybe 80% depending on your use case and definition of accuracy, I guess?

        I haven’t used Samsung’s stuff specifically. Some web search engines do cite their sources, and I find that to be a nice little time-saver. With the prevalence of SEO spam, most results have like one meaningful sentence buried in 10 paragraphs of nonsense. When the AI can effectively extract that tiny morsel of information, it’s great.

        Ideally, I don’t ever want to hear an AI’s opinion, and I don’t ever want information that’s baked into the model from training. I want it to process text with an awareness of complex grammar, syntax, and vocabulary. That’s what LLMs are actually good at.

        • NuXCOM_90Percent@lemmy.zip · 2 days ago

          Again: What is the percent “accurate” of an SEO infested blog about why ivermectin will cure all your problems? What is the percent “accurate” of some kid on gamefaqs insisting that you totally can see Lara’s tatas if you do this 90 button command? Or even the people who insist that Jimi was talking about wanting to kiss some dude in Purple Haze.

          Everyone is hellbent on insisting that AI hallucinates and… it does. You know who else hallucinates? Dumbfucks. And the internet is chock full of them. And guess what LLMs are training on? It’s the same reason I always laugh when people talk about how AI can’t do feet or hands while ignoring the existence of Rob Liefeld, or WHY so many cartoon characters only have four fingers.

          Like I said: I don’t like the AI Assistants that won’t tell me where they got information from and it is why I pay for Kagi (they are also AI infested but they put that at higher tiers so I get a better search experience at the tier I pay for). But I 100% use stuff like chatgpt to sift through the ninety bazillion blogs to find me a snippet of a helm chart that I can then deep dive on whether a given function even exists.

          But the reality is that people are still benchmarking LLMs against a reality that has never existed. The question shouldn’t be “we need this to be 100% accurate and never hallucinate” but rather “what web pages or resources were used to create this answer”, followed by doing what we should always be doing: checking the sources to see if they at least seem trustworthy.

          • AnAmericanPotato@programming.dev · 2 days ago

            Again: What is the percent “accurate” of an SEO infested blog

            I don’t think that’s a good comparison in context. If Forbes replaced all their bloggers with ChatGPT, that might very well be a net gain. But that’s not the use case we’re talking about. Nobody goes to Forbes as their first step for information anyway (I mean…I sure hope not…).

            The question shouldn’t be “we need this to be 100% accurate and never hallucinate” but rather “what web pages or resources were used to create this answer”, followed by doing what we should always be doing: checking the sources to see if they at least seem trustworthy.

            Correct.

            If we’re talking about an AI search summarizer, then the accuracy lies not in how correct the information is in regard to my query, but in how closely the AI summary matches the cited source material. Kagi does this pretty well. Last I checked, Bing and Google did it very badly. Not sure about Samsung.

            On top of that, the UX is critically important. In a traditional search engine, the source comes before the content. I can implicitly ignore any results from Forbes blogs. Even Kagi shunts the sources into footnotes. That’s not a great UX because it elevates unvetted information above its source. In this context, I think it’s fair to consider the quality of the source material as part of the “accuracy”, the same way I would when reading Wikipedia. If Wikipedia replaced their editors with ChatGPT, it would most certainly NOT be a net gain.

          • ZeroGravitas@lemm.ee · 2 days ago

            You know, I was happy to dig through 9yo StackOverflow posts and adapt answers to my needs, because at least those examples did work for somebody. LLMs for me are just glorified autocorrect functions, and I treat them as such.

            A colleague of mine had a recent experience with Copilot hallucinating a few Python functions that looked legit, ran without issue and did fuck all. We figured it out on testing, but boy was that a wake up call (colleague in question has what you might call an early adopter mindset).
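
            A hypothetical reconstruction of that failure mode (not the colleague’s actual code): a function that looks legit and runs without error, but does nothing, plus the kind of test that exposes it.

            ```python
            # Invented example of a plausible-looking no-op, in the spirit of
            # the hallucinated functions described above.

            def deduplicate_records(records: list[dict]) -> list[dict]:
                """Supposedly drops duplicate records. Runs fine, does nothing."""
                seen, result = set(), []
                for record in records:
                    key = id(record)     # bug: id() is unique per object, so no
                    if key not in seen:  # two records ever count as duplicates
                        seen.add(key)
                        result.append(record)
                return result

            def test_deduplicate_records():
                # Testing is what caught it: two equal payloads should collapse.
                records = [{"user": "a"}, {"user": "a"}]
                assert len(deduplicate_records(records)) == 1  # fails: returns 2
            ```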

      • ZeroGravitas@lemm.ee · 2 days ago

        I think you nailed it. In the grand scheme of things, critical thinking is always required.

        The problem is that, when it comes to LLMs, people seem to use magical thinking instead. I’m not an artist, so I oohd and aahd at some of the AI art I got to see, especially in the early days, when we weren’t flooded with all this AI slop. But when I saw the coding shit it spewed? Thanks, I’ll pass.

        The only legit use of AI in my field that I know of is a unit test generator, where tests were measured for stability and code-coverage increase before being submitted for dev approval. But actual non-trivial production-grade code? Hell no.
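
        A sketch of what that gate might look like, assuming pytest plus a recent coverage.py; the surrounding generator tooling is hypothetical.

        ```python
        # Acceptance gate as described above: a generated test only goes to
        # human review if it passes repeatedly (stability) and raises coverage.
        import subprocess

        def passes_repeatedly(test_file: str, runs: int = 5) -> bool:
            # Reject flaky generated tests: require consecutive clean runs.
            return all(
                subprocess.run(["pytest", test_file, "-q"]).returncode == 0
                for _ in range(runs)
            )

        def total_coverage(test_paths: list[str]) -> float:
            # Run the suite under coverage.py and return the total percentage.
            subprocess.run(["coverage", "run", "-m", "pytest", *test_paths], check=True)
            out = subprocess.run(["coverage", "report", "--format=total"],
                                 capture_output=True, text=True, check=True)
            return float(out.stdout.strip())

        def accept_generated_test(test_file: str) -> bool:
            baseline = total_coverage(["tests/"])
            if not passes_repeatedly(test_file):
                return False
            return total_coverage(["tests/", test_file]) > baseline  # must add coverage
        ```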

        • NuXCOM_90Percent@lemmy.zip · 2 days ago

          Even those examples are the kinds of things that “fall apart” if you actually think things through.

          Art? Actual human artists tend to use a ridiculous amount of “AI” these days and have been for well over a decade (probably closer to two, depending on how you define “AI”). Stuff like magic erasers/brushes are inherently looking at the picture around it (training data) and then extrapolating/magicking what it would look like if you didn’t have that logo on your shirt and so forth. Same with a lot of weathering techniques/algorithms and so forth.

          Same with coding. People more or less understand that anyone who is working on something more complex than a coding exercise is going to be googling a lot (even if it is just that you will never ever remember how to do file i/o in python off the top of your head). So a tool that does exactly that is… bad?

          Which gets back to the reality of things. Much like with writing a business email or organizing a calendar: if a computer program can do your entire job for you… maybe shut the fuck up about that program? Chatgpt et al aren’t meant to replace the senior or principal software engineer who is in lots of design meetings or optimizing the critical path of your corporate secret sauce.

          It is replacing junior engineers and interns (which is gonna REALLY hurt in ten years, but…). Chatgpt hallucinated a nonsense function? That is what CI testing and code review are for. Same as if that intern forgot to commit a file or that rockstar from facebook never ran the test suite.

          Of course, the problem there is that the internet is chock full of “rock star coders” who just insist the world would be a better place if they never had to talk to anyone and were always given perfectly formed tickets so they could just put their headphones on and work and ignore Sophie’s birthday and never be bothered by someone asking them for help (because, trust me, you ALWAYS want to talk to That Guy about… anything). And they don’t realize that they were never actually hot shit and were mostly always doing entry level work.

          Personally? I only trust AI to directly write my code for me if it is in an airgapped environment because I will never trust black box code I pulled off the internet to touch corporate data. But I will 100% use it in place of google to get an example of how to do something that I can use for a utility function or adapt to solving my real problem. And, regardless, I will review and test that just as thoroughly as the code Fred in accounting’s son wrote because I am the one staying late if we break production.


          And just to add on, here is what I told a friend’s kid who is an undergrad comp sci:

          LLMs are awesome tools. But if the only thing you bring to the table is that you can translate the tickets I assigned to you to a query to chatgpt? Why am I paying you? Why am I not expensing a prompt engineering course on udemy and doing it myself?

          Right now? Finding a job is hard but there are a lot of people like me who understand we still need to hire entry level coders to make sure we have staff ready to replace attrition over the next decade (or even five years). But I can only hire so many people and we aren’t a charity: If you can’t do your job we will drop you the moment we get told to trim our budget.

          So use LLMs because they are an incredibly useful tool. But also get involved in design and planning as quickly as possible. You don’t want to be the person writing the prompts. You want to be the person figuring out what prompts we need to write.

          • EldritchFeminity@lemmy.blahaj.zone · 2 days ago

            In short, AI is useful when it’s improving workflow efficiency and not much else beyond that. People just unfortunately see it as a replacement for the worker entirely.

            If you wanna get loose with your definition of “AI,” you can go all the way back to the MS Paint magic wand tool for art. It’s simply an algorithm for identifying pixels within a certain color tolerance of each other.
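
            For reference, a minimal sketch of that idea (real tools add anti-aliasing, contiguity options, and smarter colour metrics):

            ```python
            # Flood fill that selects connected pixels within a colour tolerance
            # of the clicked seed pixel: the essence of a magic-wand tool.
            from collections import deque

            def magic_wand(image, seed, tolerance=32):
                # image: 2D list of (r, g, b) tuples; returns selected (x, y).
                h, w = len(image), len(image[0])
                sx, sy = seed
                target = image[sy][sx]

                def close_enough(pixel):
                    return all(abs(c - t) <= tolerance for c, t in zip(pixel, target))

                selected, queue = {(sx, sy)}, deque([(sx, sy)])
                while queue:
                    x, y = queue.popleft()
                    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                        if (0 <= nx < w and 0 <= ny < h
                                and (nx, ny) not in selected
                                and close_enough(image[ny][nx])):
                            selected.add((nx, ny))
                            queue.append((nx, ny))
                return selected
            ```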

            The issue has never been the tool itself, just the way that it’s made and/or how companies intend to use it.

            Companies want to replace their entire software division, senior engineers included, with ChatGPT or equivalent because it’s cheaper, and they don’t value the skill of their employees at all. They don’t care how often it’s wrong, or how much more work the people that they didn’t replace have to do to fix what the AI breaks, so long as it’s “good enough.”

            It’s the same in art. By the time somebody is working as an artist, they’re essentially at a senior software engineer level of technical knowledge and experience. But society doesn’t value that skill at all, and has tried to replace it with what is essentially a coding tool trained on code sourced from pirated software and sold on the cheap. A new market of cheap knockoffs on demand.

            There’s a great story I heard from somebody who works at a movie studio where they tried hiring AI prompters for their art department. At first, things were great. The senior artist could ask the team for concept art of a forest, and the prompters would come back the next day with 15 different pictures of forests while your regular artists might have that many at the end of the week. However, if you said, “I like this one, but give me some versions without the people in them,” they’d come back the next day with 15 new pictures of forests, but not the original without the people. They simply could not iterate, only generate new images. They didn’t have any of the technical knowledge required to do the job because they depended completely on the AI to do it for them. Needless to say, the studio has put a ban on hiring AI prompters.

      • tetris11@lemmy.ml · 2 days ago

        Perplexity is kinda half-decent with showing its sources, and I do rely on it a lot to get me 50% of the way there, at which point I jump into the suggested sources, do some of my own thinking, and do the other 50% myself.

        It’s been pretty useful to me so far.

        I’ve realised I don’t want complete answers to anything really. Give me a roundabout gist or template, and then tell me where to look for more if I’m interested.

      • tauren@lemm.ee · 1 day ago

        For real. If a human performs task X with 80% accuracy, an AI needs to perform the same task with 80.1% accuracy to be a better choice - not 100%. Furthermore, we should consider how much time it would take for a human to perform the task versus an AI. That difference can justify the loss of accuracy. It all depends on the problem you’re trying to solve. With that said, it feels like AI on mobile devices hardly solves any problems.
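
        Making that concrete with invented numbers (the cleanup cost of an error is an assumption):

        ```python
        # Expected cost per task: time spent plus expected cleanup when wrong.
        ERROR_CLEANUP_MIN = 45.0  # assumed cost of shipping a wrong result

        def expected_cost(accuracy, minutes):
            return minutes + (1 - accuracy) * ERROR_CLEANUP_MIN

        human = expected_cost(0.80, 30.0)  # 80% accurate, 30 min per task
        ai = expected_cost(0.78, 1.0)      # slightly less accurate, 30x faster
        print(f"human: {human:.1f} min/task, AI: {ai:.1f} min/task")
        # human: 39.0 vs AI: 10.9 -> speed can justify a small accuracy loss
        ```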