

If you don’t care about metal in the wood, you could use a pair of diagonal cutters to snip them flush with the wood rather than try to extract them.
I’ve noticed more and more people taking sooo much stuff with them on board too. Like they think they are pioneers and need a covered wagon’s worth of provisions to weather the trip from ATL to LAX.
I suppose some of that can be blamed on the airlines for steep baggage fees, but holy crap do people try to take way too much junk with them everywhere they go. So they all take 10 minutes to unpack.
The post title makes it sound like Reddit is doing some sort of automated classification of user politics with an ML technique. But the screenshot does not show that. It shows an LLM summary of a user’s posting history. If the tool were run on a user who posted exclusively to a cat subreddit, the summary would have been about how the user likes cats. Whatever the utility or accuracy of LLM summaries, what the screenshot shows is far more anodyne than what this post’s title implies is happening.
The screenshot shows an LLM summary of a user’s posting history. Is that what you mean by “determine belief values stance and more”? Is there more to this? How is that summary different from scrolling through someone’s posting history to see what they post about?
PhD Level expertise:
I was referring to the story that implied the rate of loan defaults was rising among households earning over 150K. The data showed a default rate that increased from something like 0.17% to 0.36% of all households in that category, but the story didn’t describe the variance around that rate, and didn’t describe the reliability of the administrative records from which the rate was calculated–two factors that will dominate percentage fluctuations at rates that small.
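To make the variance point concrete, here is a rough sketch. The sample size is hypothetical (5,000 households, not a figure from the story); the point is just that a 95% Wilson score interval around a proportion this small is wide, and the intervals around 0.17% and 0.36% can overlap substantially.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical sample: 5,000 households in the income category.
# 0.17% of 5,000 is about 9 defaults; 0.36% is about 18.
lo1, hi1 = wilson_interval(9, 5000)
lo2, hi2 = wilson_interval(18, 5000)
print(f"rate ~0.17%: interval [{lo1:.4%}, {hi1:.4%}]")
print(f"rate ~0.36%: interval [{lo2:.4%}, {hi2:.4%}]")
```

With these (assumed) numbers the two intervals overlap, so the apparent doubling of the rate could be consistent with ordinary sampling noise, before even considering errors in the underlying administrative records.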
If you go into the comment sections where that story was posted, you will see people talking about how America forces even middle class people to spend lavishly beyond their needs, or how people in this class are irresponsible with money, or how impossible it is to live in HCOL cities, or how wealthy people are stealing everything, or how corporations are stealing everything. Few people really questioned the plausibility of the story’s framing.
Recently there was a news story about how people earning 150k were struggling financially. Even just reading the article was enough to know the idea was bullshit (which is probably why the headline used such mealy-mouthed language). But that did not stop a bunch of users from prognosticating about how terrible the economy is and how we are on the verge of collapse.
The idea that households earning more than 150k are struggling is objectively wrong. They are not. But that idea is consistent with the political sentiments of users here (billionaires vs. everyone else in a zero-sum economy), so it gets traction.
People pass around trash sources like The New Republic, which often just copies other news outlets but reframes stories to be consistent with lefty sentiments about whatever current events are going on.
In one community I encountered an image macro criticizing a judge for ruling against some plaintiffs suing Trump. It was completely divorced from any context, making it appear the judge was in the tank for Trump when, if you knew even a little about her or the ruling, you would immediately recognize that idea as bullshit.
Those are just a few examples off the top of my head.
Today on the train ride into work there was a group of what looked to be men in their mid-50s joking about who in their friend group was “alpha”, “beta”, or “sigma”. I think that if, at this point in history, even a near-retirement-age group of commuters is using the lingo, then internet culture is just culture.
I did have to resist the urge to tell them to touch grass.
In conclusion, putting the internet on everyone’s phones was a huge mistake and we should go back to the way it was in the before-times.
Andrew German wrote about this. From his blog post I got the impression that this issue mostly impacts compsci. Maybe it’s more widespread than that field, but my experience with compsci research is that a lot more emphasis is placed on conferences than on journals, and the general vibe I got from working with compsci folks was that volume mattered a lot more than quality when it came to publication. So maybe those quirks of the field left them more vulnerable to AI slop in the review process.
How can this be projection? Silicon Valley’s culture brought us fraudulent medical tech, cryptocurrency scams, industrialized wire fraud, illegal taxis, and DRM juice machines. A paragon of honesty and integrity if I ever saw one.
Before you buy one, look up how much replacement parts cost for whatever new machine you are considering. I had to get a new one a few years back because the filter in my old one kept getting clogged and could not be replaced on its own–you had to replace a larger assembly that cost almost 200 dollars.
A long time ago I did some volunteer work for a companion bird sanctuary, and the number of people who got a bird as a pet and were totally unprepared for the care required was astounding. Almost all the birds at that sanctuary had some sort of serious behavioral issue because the people who got them just could not keep up with their care. You should probably talk to someone with experience keeping birds before making a decision, because the experience can be terrible for the bird if you are not ready.
He also has good presentation skills. Well worth the watch.
What does dystopia mean to you?
In this particular case, the thing I find dystopian is the tendency of a disconcertingly high number of people to allow a tech company to mediate (and eventually monetize) every aspect of their social lives. The point I was making is that if this tool were to see widespread adoption, then even putting aside the massive surveillance and manipulation issues, a subset of people would inevitably come to rely on it to the point where they cannot interact with others outside of it. That is bad. It’s bad because it takes a fundamental human experience and locks it behind a paywall. It is also bad because the sort of interactions this tool could facilitate are going to be, by their nature, superficial. You simply cannot have a meaningful interaction with someone if you are relying on a crib sheet to navigate it.
This tool would inevitably lead to the atrophy of social skills, in the same way that overusing a calculator causes arithmetic skills to atrophy and overusing a GPS causes spatial reasoning to atrophy. But in this case it is worse, because this tool would contribute to the further isolation of people who, judging by the excuses offered in this thread, are already bad at social interaction. People are already lonely, and apparently social media is contributing to that trend; allowing it to come between you and face-to-face interactions is not going to help.
“This is akin to having sticky notes to remember things, just in a more compact convenient application.”
I really disagree with this analogy. It would be more appropriate to say that this is like carrying around a stack of index cards with notes about people in your life and pulling them out every time you interact with someone. If someone in my life needed an index card to interact with me, I would find that insulting, because it is insincere and dehumanizing. It communicates to others: “I don’t care enough about you to bother learning even basic information about who you are.”
“The problem isn’t the technology, it’s the application.”
I really cannot stand this bromide. We are talking about a company with a track record of using technology to abuse people. They facilitated a genocide (through incompetence, but they clearly did not give a shit). They prey on people when they feel bad. They researched ways to make people feel bad (so they would be easier to manipulate). They design their tools to be addictive, and then they manipulate and abuse people on their platform. Saying “technology is neutral” is the least interesting thing you can say about tech in the context of the current trends of Silicon Valley, a place whose thought leaders and influencers are becoming ever more obsessed with manipulation, control, and fascism. We don’t need to speculate about this technology; we already know its applications won’t be neutral. They will be used to harm people for profit.
A tool that keeps track of people in your life and gives you small-talk cues seems dystopian in itself. Relying on it would just further isolate you from others.
Thinking about it, I am pretty sure I would immediately despise anyone who used this tool on me, even apart from the fact that they would be putting me into a meta database without my consent. I would despise people who use this tool for the same reason I despise people who crudely implement the strategies from “How to win friends and influence people”. Their interactions are insincere and manipulative.
The average of all the serious guesses in this thread.
This has largely been my experience as well. I work as a statistician and it seems like the folks who arrived at data science through a CS background are less equipped to think through data analysis. Though I suppose to be fair, their coding skills are better than mine. But if OP wants to do data journalism, of the sort Pro Publica is gearing up for, then a stats background would be better.
Probably statistics. A lot of journalists seem to struggle with stats, so that could give you an advantage. You can pick up a lot of programming skills in a stats program, and you can even lean into statistical programming if you want. You’d have to seek out the more advanced programming side of a stats degree, but it is there, and I think stats is harder to learn than the coding skills you need for data science.
They are hard to read because they are written to explain concepts to people who already understand them. Handy if you just need them for reference; useless if you are trying to learn. Which is why RTFM is often bad advice.
AI doomsday marketing wank has the same vibe as preteens at a sleepover getting spooked by a ouija board.