• 6 Posts
  • 30 Comments
Joined 2 years ago
Cake day: June 12th, 2023









  • 10% to 80% seems like too wide a range for “how many users are on the largest instance”. 10% means only 1 in 10 users are on the largest instance and 9 in 10 are spread across the rest; if anything that seems overly fragmented. At the other end, 80% means 4 in 5 users are on the largest instance and only 1 in 5 are shared between all the other instances, which is incredibly concentrated.

    I’d suggest narrowing the range to 20% to 66%: 1 in 5 users on the largest instance is still dispersed enough to ensure competition and variety, while 2 in 3 users on the largest instance is already well into monopoly territory.









  • Womble@lemmy.world to Technology@lemmy.world · *Permanently Deleted* · +8 / −1 · 2 months ago

    Not the parent, but LLMs don’t solve anything; they allow more work to be done with less effort in some spaces. Just as the horse-drawn plough didn’t solve any problem that couldn’t be solved by people tilling the earth by hand.

    As an example, my partner is an academic, and the first step in working on a project is often a literature search of existing publications. This can be a long process, even more so if you are moving outside of your typical field into something adjacent (you have to learn what exactly you are looking for). I tried setting up a locally hosted, LLM-powered research tool: you ask it a question and it goes away, searches arXiv for relevant papers, refines its search query based on the abstracts it got back, and iterates. At the end you get summaries of what it thinks is the current SotA for the asked question, along with a list of links to papers it thinks are relevant.

    It’s not perfect, as you’d expect, but it turns a minute spent typing out a well-thought-out question into an hours-long head start on getting into the research surrounding your question (and does it all without sending any data to OpenAI et al.). Getting you over the initial hump of not knowing exactly where to start is where I see a lot of the value of LLMs.
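    For anyone curious what that loop looks like, here’s a rough sketch. The arXiv Atom API endpoint and its query parameters are real; the `refine_query` function is a placeholder for wherever your local LLM plugs in, and all the function names are my own invention, not the actual tool:

    ```python
    # Sketch of the iterative search-refine loop: query arXiv, read the
    # abstracts, let a local LLM sharpen the query, repeat.
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    ATOM_NS = "{http://www.w3.org/2005/Atom}"  # namespace of arXiv's Atom feed


    def build_query_url(query: str, max_results: int = 10) -> str:
        """Build an arXiv API query URL for the given search terms."""
        params = urllib.parse.urlencode({
            "search_query": f"all:{query}",
            "start": 0,
            "max_results": max_results,
        })
        return f"http://export.arxiv.org/api/query?{params}"


    def fetch_abstracts(query: str) -> list[dict]:
        """Fetch title, abstract, and link for papers matching the query."""
        with urllib.request.urlopen(build_query_url(query)) as resp:
            feed = ET.fromstring(resp.read())
        return [
            {
                "title": entry.findtext(f"{ATOM_NS}title"),
                "abstract": entry.findtext(f"{ATOM_NS}summary"),
                "link": entry.findtext(f"{ATOM_NS}id"),
            }
            for entry in feed.findall(f"{ATOM_NS}entry")
        ]


    def refine_query(question: str, papers: list[dict]) -> str:
        """Placeholder: a locally hosted LLM would read the abstracts here
        and propose a sharper search query for the next iteration."""
        raise NotImplementedError("plug in your local LLM here")


    def literature_search(question: str, iterations: int = 3) -> list[dict]:
        """Iterate: search, refine the query against the abstracts, repeat."""
        query = question
        papers: list[dict] = []
        for _ in range(iterations):
            papers = fetch_abstracts(query)
            query = refine_query(question, papers)  # LLM-guided refinement
        return papers
    ```

    The summarisation step at the end would be one more LLM call over the final batch of abstracts; the point is just that the whole thing is a plain fetch-and-refine loop, nothing exotic.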