

I hope you don't play video games or stream HD video, given that they use more electricity for less social benefit than this would.
And unless you are Stephen King or the like, exactly how are you going to get the publishing cartel (I think they're consolidated down to 3-4 publishers now) to change their contract to not include this? Their response will almost certainly be either “that's non-negotiable” or “OK, then you get half as much money”.
As much as you can hold a computer manufacturer responsible for buggy software.
When I did my undergrad the core modules had upwards of 400 people in them, and I never had a single multiple-choice test in my entire degree. That's a choice, not a necessity.
He could see AI being used more immediately to address certain “low-hanging fruit,” such as checking for application completeness. “Something as trivial as that could expedite the return of feedback to the submitters based on things that need to be addressed to make the application complete,” he says. More sophisticated uses would need to be developed, tested, and proved out.
Oh no, the dystopian horror…
It's a shit article, with TechCrunch changing the words to get people in a flap about AI (for or against). The actual quote is:
“I’d say maybe 20 percent, 30 percent of the code that is inside of our repos today and some of our projects are probably all written by software”
“Written by software” reasonably includes machine-refactored code, automatically generated boilerplate, and things generated by AI assistants. Through that lens, 20% doesn't seem crazy.
Git is, but it has no process of discovery or hosting by itself. Those are needed to efficiently share open source software to large numbers of people.
10% to 80% seems like too wide a range for “how many are on the largest instance”. 10% means only 1 in 10 users are on the largest instance and 9 in 10 are spread out across the rest; if anything, that seems overly fragmented. At the other end, 80% means 4 in 5 users are on the largest instance and 1 in 5 are shared between all the other instances, which is incredibly concentrated.
I'd suggest narrowing the range to 20% to 66%: 1 in 5 on the largest instance is still plenty dispersed to ensure there is competition/variety, and 2 in 3 users on the largest instance is already well into monopoly territory.
As per the article:
It uses high frequency radio waves to disrupt or damage critical electronic components inside drones, causing them to crash or malfunction.
It's not jamming the comms; it's inducing currents inside the electronics of the drone to fry them.
I don't think that's really a fair comparison. Babies exist with images and sounds for over a year before they begin to learn language, so it would make sense that they begin to understand the world in non-linguistic terms and then apply language to that. LLMs only exist in relation to language, so they couldn't understand a concept separately from language; it would be like asking a person to conceptualise radio waves prior to having heard about them.
Probably, given that LLMs only exist in the domain of language. Still interesting that they seem to have a “conceptual” system that is commonly shared between languages.
Compared to a human, who forms an abstract thought and then translates that thought into words. Which words I use has little to do with which other words I've used, except to make sure I'm following the rules of grammar.
Interesting that…
Anthropic also found, among other things, that Claude “sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal ‘language of thought’.”
It does not, unless you run weights that someone else has modified to remove the baked-in censorship. If you run the unmodified weights released by DeepSeek, it will refuse to answer most things that the CCP don't like being discussed.
Not the parent, but LLMs don't solve anything; they allow more work with less effort expended in some spaces. Just as the horse-drawn plough didn't solve any problem that couldn't be solved by people tilling the earth by hand.
As an example, my partner is an academic, and the first step in working on a project is often doing a literature search of existing publications. This can be a long process, even more so if you are moving outside of your typical field into something adjacent (you have to learn what exactly you are looking for). I tried setting up a locally hosted LLM-powered research tool where you ask it a question and it goes away, searches arXiv for relevant papers, refines its search query based on the abstracts it got back, and iterates. At the end you get summaries of what it thinks is the current SotA for the asked question, along with a list of links to papers that it thinks are relevant.
It's not perfect, as you'd expect, but it turns a minute typing out a well-thought-out question into hours' worth of a head start on getting into the research surrounding your question (and does it all without sending any data to OpenAI et al.). Getting you over that initial hump of not knowing exactly where to start is where I see a lot of the value of LLMs.
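For what it's worth, the control flow is simple to sketch. This is a hypothetical outline of the search-refine-iterate loop, not the actual tool I set up: `query_arxiv` and `refine_query` are stub placeholders for the real arXiv API call and the local LLM prompt, with canned data so the loop can run on its own.

```python
def query_arxiv(query):
    # Stub: a real version would hit the arXiv API (export.arxiv.org/api/query)
    # and return (title, abstract) pairs parsed from the Atom feed.
    fake_corpus = {
        "sparse retrieval": [("BM25 revisited", "classic lexical ranking")],
        "dense retrieval": [("DPR", "dense passage retrieval with dual encoders")],
    }
    return fake_corpus.get(query, [])

def refine_query(question, query, abstracts):
    # Stub: a real version would prompt the local LLM with the question and
    # the abstracts retrieved so far, asking for a sharper search query.
    return "dense retrieval" if "dense" in question else query

def literature_search(question, initial_query, rounds=3):
    """Iteratively search, refine the query from the abstracts, and repeat."""
    query = initial_query
    papers = []
    for _ in range(rounds):
        results = query_arxiv(query)
        papers.extend(results)
        new_query = refine_query(question, query, [a for _, a in results])
        if new_query == query:  # converged: no better query was suggested
            break
        query = new_query
    # A real version would finish by asking the LLM for a SotA summary with
    # links; here we just de-duplicate and return the collected papers.
    seen, unique = set(), []
    for title, abstract in papers:
        if title not in seen:
            seen.add(title)
            unique.append((title, abstract))
    return unique
```

The refine step is what makes it useful outside your own field: the first query is usually slightly wrong, and the abstracts that come back teach the model (and you) the terminology the field actually uses.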
Look, Reddit bad, AI bad. Engaging with anything more than the most surface-level reactions is hard, so why bother?
At a recent conference in Qatar, he said AI could even “unlock” a system where people use “sliders” to “choose their level of tolerance” about certain topics on social media.
That combined with a level of human review for people who feel they have been unfairly auto-moderated seems entirely reasonable to me.
Where are you getting that from? It's not in the linked Firefox terms of use, or in the linked Mozilla account terms of service.
I just know people are going to flock to my novel, each copy of which I manually typed out on a typewriter myself.
So by going harder on blocking content than China? Because that's what they do, but most of the big providers get through after a day or two of downtime each time the government makes a change to block them.
Let me try with another example that can get round your blind AI hatred.
If people were using a calculator to calculate the value of an integral, they would have significantly less diversity of results, because they were all using the same tool. Less diversity of results has nothing to do with how good the tool is; it might be 100% right or 100% wrong, but if everyone is using it then they will all get the same results (or similar ones, if it has a random element to it as LLMs do).
It's a godsend when you have to use Windows for whatever reason and you can have a functional OS to do things with.