Not the parent, but LLMs don't solve anything; they allow more work with less effort expended in some spaces, just as the horse-drawn plough didn't solve any problem that couldn't be solved by people tilling the earth by hand.
As an example, my partner is an academic, and the first step in working on a project is often a literature search of existing publications. This can be a long process, and even more so if you are moving outside your typical field into something adjacent (you first have to learn what exactly you are looking for). I tried setting up a locally hosted, LLM-powered research tool: you ask it a question and it goes away, searches arXiv for relevant papers, refines its search query based on the abstracts it gets back, and iterates. At the end you get a summary of what it thinks is the current SotA for the question asked, along with a list of links to papers it thinks are relevant.
It's not perfect, as you'd expect, but it turns a minute spent typing out a well-thought-out question into hours' worth of head start on the research surrounding that question (and does it all without sending any data to OpenAI et al). Getting you over the initial hump of not knowing exactly where to start is where I see a lot of the value of LLMs.
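The search/refine/iterate loop that tool runs can be sketched roughly as below. Everything here is a hypothetical stand-in: `search_arxiv` and `ask_llm` are stubs (with made-up queries and paper titles) so the loop structure is visible and runnable on its own; a real version would hit the arXiv API (http://export.arxiv.org/api/query) and prompt a locally hosted model.

```python
def search_arxiv(query, max_results=5):
    # Stub: pretend each query returns a few matching titles/abstracts.
    # A real implementation would call the arXiv query API here.
    fake_index = {
        "efficient transformers": ["survey of efficient transformers",
                                   "linear attention via kernel methods"],
        "sparse attention": ["longformer: sparse attention patterns",
                             "big bird: block sparse attention"],
    }
    return fake_index.get(query, [])[:max_results]

def ask_llm(prompt):
    # Stub: a real version would ask a local model to suggest a narrower
    # query based on the abstracts; here one refinement step is hard-coded.
    if "efficient transformers" in prompt:
        return "sparse attention"
    return ""  # no further refinement suggested

def literature_search(question, rounds=3):
    query, collected = question, []
    for _ in range(rounds):
        abstracts = search_arxiv(query)
        collected.extend(abstracts)
        refined = ask_llm(f"Given these abstracts: {abstracts}, "
                          f"suggest a narrower search for: {query}")
        if not refined or refined == query:
            break  # the model has nothing narrower to try
        query = refined
    return collected

papers = literature_search("efficient transformers")
```

In the real tool the final step also summarises the collected abstracts with the model; the point of the loop is that one question turns into a reading list without you having to hand-tune search queries for a field you don't know yet.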
Look, Reddit bad, AI bad. Engaging with anything more than the most surface-level reactions is hard, so why bother?
At a recent conference in Qatar, he said AI could even “unlock” a system where people use “sliders” to “choose their level of tolerance” about certain topics on social media.
That combined with a level of human review for people who feel they have been unfairly auto-moderated seems entirely reasonable to me.
Where are you getting that from? It's not in the linked Firefox terms of use, or in the linked Mozilla account terms of service.
I just know people are going to flock to my novel, every copy of which I typed out myself on a typewriter.
So by going harder on blocking content than China? Because that's what they do, but most of the big providers get through after a day or two of downtime each time the government makes a change to block them.
Let me try with another example that can get round your blind AI hatred.
If people were using a calculator to compute the value of an integral, they would have significantly less diversity of results, because they would all be using the same tool. Less diversity of results has nothing to do with how good the tool is; it might be 100% right or 100% wrong, but if everyone is using it then they will all get the same results (or similar results, if it has a random element, as LLMs do).
That snark doesn't help anyone.
Imagine the AI were 100% perfect and gave the correct answer every time: people using it would still have a significantly reduced diversity of results, as they would all be using the same tool to get the same correct answer.
That people using an AI get a smaller diversity of results is neither good nor bad; it's just the way things are, the same way people sharing one pack of pens use a smaller variety of colours than people using whatever pens they happen to have.
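To make the point concrete, here's a toy simulation with entirely made-up numbers: independent workers scatter around some value because each makes their own small mistakes, while everyone using one shared tool gets identical answers, whether the tool is right or wrong.

```python
import random

random.seed(0)
true_value = 2.0  # say, the value of some definite integral

# Independent workers: each makes their own small mistakes.
by_hand = [true_value + random.gauss(0, 0.3) for _ in range(100)]

# Everyone uses the same calculator: identical answers every time,
# regardless of whether the shared answer is actually correct.
calculator_answer = 2.0  # could just as well be a wrong value like 1.5
by_calculator = [calculator_answer] * 100

def spread(xs):
    # Population standard deviation: a crude measure of result diversity.
    mean = sum(xs) / len(xs)
    return (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5

print(spread(by_hand))        # nonzero: diverse results
print(spread(by_calculator))  # 0.0: no diversity, good tool or bad
```

Note that the zero spread in the second case says nothing about correctness: change `calculator_answer` to a wrong value and the diversity is still zero.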
Literally everyone learns from unreliable teachers, the question is just how reliable.
Ha no, I just have an extension to automatically add that to wiki links as I dislike the newer skin. I totally forgot it was there!
It's very common in all sorts of fields; Max Planck quipped that physics advances one dead professor at a time.
If they had stuck to that I wouldn't have an issue with it, but they broaden it out to:
I’m tired of calling people out again and again for dumping on PHP.
I’m tired of people dumping on Windows, that most popular operating system, because it’s not what we choose to use
I don't see criticising PHP or Windows as a problem; both have serious faults. The argument put forth here conflates two things: criticising a language (fine, IMO) and criticising people for liking a language (not fine). We should welcome the former while insisting the latter isn't acceptable.
So should we be entirely uncritical of whichever language people choose to use, because criticism might be perceived as off-putting to someone? Would someone writing in Brainfuck or Whitespace or FORTRAN 66 for an actual project (i.e. not just for their own interest) not be subject to criticism for that choice?
Discussion of how languages have bad features and what they could do better is how progress gets made and languages improve over time. I personally find the level of dumping on Python that seems popular recently annoying, but the critics often have a point. Those points are useful in figuring out either how to make those languages better or how the next language should be designed. Labelling that as problematic and as “actively participating in the exclusion of women from STEM” seems to me to be a huge reach.
It would be interesting to give these scores a bit of context: what level would a random person off the street, a history undergrad and a history professor score?
and then the same amount of energy is used just burning gasoline (never mind diesel and kerosene)
Yup, you can download as many YouTube videos, news articles, and images as you like!
The point is that Google is no longer just listing search results. For years now it has been giving the “correct” answer as well as results. This started off with things it could recognise and easily solve, like calculations (“what is 432 times 548”), but has now moved on to general queries powered by LLMs that have no knowledge of fact.
It does not, unless you run weights that someone else has modified to remove the baked-in censorship. If you run the unmodified weights released by DeepSeek, it will refuse to answer most things the CCP doesn't like being discussed.