If they can charge 30% less without Apple’s fees, then why are their prices the same whether you buy on their iOS app or direct on their website? Why have they been overcharging users who don’t buy through the iOS app by 30% all this time?
Just FYI - you’re going to spend far, FAR more time and effort reading release notes and manually upgrading containers than you will letting them run :latest and auto-update and fixing the occasional thing when it breaks. Like, it’s not even remotely close.
Pinning major versions makes sense for certain containers that need specific versions, for containers that regularly have breaking changes requiring manual upgrade steps, or for absolutely mission-critical services that can’t handle a little downtime from a failed update a couple of times a decade - but for everything else it’s a waste of time.
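For what it’s worth, a rough sketch of how that split can look in practice (the images, tags, and the use of Watchtower for the auto-updating are just illustrative, not a specific recommendation):

```
# Breaking-change-prone / mission-critical services: pin the major version
docker run -d --name db postgres:16

# Everything else: track :latest...
docker run -d --name whoami traefik/whoami:latest

# ...and let something like Watchtower pull new images and recreate containers
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower
```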
Yeah that’s about 2 and a half round-trips between Dallas and Houston, that’s…not a lot to be calling this thing ready to go and pulling out the safety drivers.
I wonder how these handle accidents, traffic stops, bad lane markings from road construction, mechanical failure, bad weather (heavy rain making it difficult/impossible to see lane markings), etc.
You’d think they would be keeping the safety drivers in place for at least 6+ months of regular long-haul drives and upwards of 100k miles to cover all bases.
I run all of my Docker containers in a VM (well, 4 different VMs, split according to the network/firewall needs of the containers they run). Each VM is given about double the RAM needed for everything it runs, and enough cores that it never (or very, very rarely) maxes them out. I then allow the containers to use whatever they need, unrestricted, while monitoring the overall resource utilization of the VM itself (cAdvisor + node_exporter + Prometheus + Grafana + Alertmanager). If I find that a VM is creeping up on its load or memory limits, I’ll investigate which container is driving the usage and then either bump the VM’s limits up or address the service itself and modify its settings to drop back down.
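If you want to replicate the monitoring side, this is roughly how the two exporters get stood up (image names and mounts follow the upstream quick-start docs; ports and tags are adjustable):

```
docker run -d --name cadvisor -p 8080:8080 \
  -v /:/rootfs:ro -v /var/run:/var/run:ro -v /sys:/sys:ro \
  -v /var/lib/docker/:/var/lib/docker:ro \
  gcr.io/cadvisor/cadvisor:latest

docker run -d --name node-exporter --net host --pid host \
  -v /:/host:ro,rslave \
  quay.io/prometheus/node-exporter:latest --path.rootfs=/host

# Once Prometheus scrapes both, queries like these show which container
# is driving the load:
#   sum by (name) (rate(container_cpu_usage_seconds_total[5m]))
#   sum by (name) (container_memory_working_set_bytes)
```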
Theoretically I could implement per-container resource limits, but I’ve never found the need. I have heard some people complain about certain containers leaking memory and creeping up over time, but I have an automated backup script which stops all containers and rsyncs their mapped volumes to an incremental backup system every night, so none of my containers stay running for longer than 24 hours continuously anyway.
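The script is nothing fancy - something along these lines, where the paths, backup host, and dated-directory scheme are placeholders rather than the actual script:

```
#!/usr/bin/env bash
# Nightly: stop containers, rsync bind-mounted volumes to the backup box, restart.
set -euo pipefail

DOCKER_DIR="$HOME/docker"                   # compose files + bind-mounted volumes
DEST="backup@backuphost:/backups/docker"    # incremental backup target

for proj in "$DOCKER_DIR"/*/; do
  (cd "$proj" && docker compose stop)       # quiesce everything before copying
done

# Hard-link unchanged files against the previous run to keep it incremental
# (maintaining the "latest" symlink on the backup side is left out here).
rsync -aH --delete --link-dest="../latest" \
  "$DOCKER_DIR"/ "$DEST/$(date +%F)/"

for proj in "$DOCKER_DIR"/*/; do
  (cd "$proj" && docker compose start)
done
```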
People always say to let the system manage memory and not interfere with it because it’ll always make the best decisions, but personally, on my systems, whenever it starts moving significant data into swap the system gets laggy, jittery, and slow to respond. Every time I go to use a system that’s been sitting idle for a bit and it feels sluggish, I check the stats and find that, sure enough, it’s moved some of its memory into swap, and responsiveness doesn’t pick up until I manually empty the swap so it’s operating fully out of RAM again.
So, with that in mind, I always give systems plenty of RAM to work with and set vm.swappiness=0. Whenever I forget to do that, I will inevitably find the system is running sluggishly at some point, see that a bunch of data is sitting in swap for some reason, clear it out, set vm.swappiness=0, and then it never happens again. Other people will probably recommend differently, but that’s been my experience after ~25 years of using Linux daily.
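Concretely, that amounts to something like this (the sysctl.d file name is arbitrary):

```
sudo sysctl vm.swappiness=0                                            # apply now
echo 'vm.swappiness = 0' | sudo tee /etc/sysctl.d/99-swappiness.conf   # persist across reboots

# "Manually empty the swap" = push everything back into RAM
# (needs enough free RAM to hold whatever is currently swapped out):
sudo swapoff -a && sudo swapon -a
```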
I self-host Bitwarden, hidden behind my firewall and only accessible through a VPN. It’s perfect for me. If you’re going to expose your password manager to the internet, you might as well just use the official cloud version IMO since they’ll likely be better at monitoring logs than you will. But if you hide it behind a VPN, self-hosting can add an additional layer of security that you don’t get with the official cloud-hosted version.
Downtime isn’t an issue as clients will just cache the database. Unless your server goes down for days at a time you’ll never even notice, and even then it’ll only be an issue if you try to create or modify an entry while the server is down. Just make sure you make and maintain good backups. Every night I stop and rsync all containers (including Bitwarden) to a daily incremental backup server, as well as making nightly snapshots of the VM it lives in. I also periodically make encrypted exports of my Bitwarden vault which are synced to all devices - those are useful because they can be natively imported into KeePassXC, allowing you to access your password vault from any machine even if your entire infrastructure goes down. Note that even if you go with the cloud-hosted version, you should still be making these encrypted exports to protect against vault corruption, deletion, etc.
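If you want to script the export part, the Bitwarden CLI can do it - something like the following, though treat the exact flags as from-memory and double-check them against "bw export --help" for your version:

```
# Unlock and grab a session token, then write an encrypted export
# (the output path is a placeholder):
export BW_SESSION="$(bw unlock --raw)"
bw export --format encrypted_json --output "$HOME/exports/bitwarden-$(date +%F).json"
```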
Sure in a decade or so it might have matured enough to have shed all these issues
That’s the point. They want to set themselves up so that when the issues are shed and it becomes a realistic product, they’re already in a place where their product can be the one that takes over the market. If you wait until a product is viable before starting on development, you’re too late.
I abandoned Google when they started throwing shopping links at the top of every search, even when searching for things that have no relevance to shopping, and they started artificially promoting scams and paid material above actual results.
Google Search was best around 10-15 years ago when their only focus was providing the best results they could (remember when you could actually click the top result and you would be taken to the most applicable page instead of some unrelated ad or scam?). Now their focus is on providing the best product possible for their actual customers (paid advertisers) even when it means trashing their own product in the process.
2-4G for swap (more if you want to hibernate), the rest for /. Only add a boot/EFI partition if needed.
Over-partitioning is a newbie mistake IMO, it usually causes way more problems than it solves.
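For a typical single-disk install that works out to something like this (device name and exact sizes are illustrative; skip the ESP if the machine doesn’t boot via UEFI):

```
sudo parted /dev/sdX -- mklabel gpt
sudo parted /dev/sdX -- mkpart ESP fat32 1MiB 513MiB            # only if UEFI
sudo parted /dev/sdX -- set 1 esp on
sudo parted /dev/sdX -- mkpart swap linux-swap 513MiB 4.5GiB    # 2-4G, more if hibernating
sudo parted /dev/sdX -- mkpart root ext4 4.5GiB 100%            # everything else on /
```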
I don’t like the fact that I could delete every copy using only the mouse and keyboard from my main PC. I want something that can’t be ransomwared and that I can’t screw up once created.
Lots of ways to get around that without having to go the route of burning a hundred blu-rays with complicated (and risky) archive splitting and merging. Just a handful of external HDDs that you "zfs send" to and cycle on some regular schedule would handle it. So buy 3 drives, back up your data to all 3 of them, then unplug 2 and put them somewhere safe (desk at work, friend or family member’s house, etc.). Continue backing up to the one you keep local for the next ~month and then rotate the drives. So at any given time you have an on-site copy that’s up to date, and two off-site copies that are no more than 1 and 2 months old respectively. Immune to ransomware, accidental deletion, fire, flood, etc., and super easy to maintain and restore from.
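The rotation itself is just a couple of commands per month - roughly like this, with made-up pool/dataset names:

```
zfs snapshot -r tank/data@2024-06

# First time a given drive is used: full replication send
zfs send -R tank/data@2024-06 | zfs receive -F offsite1/data

# Later months on that same drive: incremental from the last snapshot it already has
zfs send -R -i tank/data@2024-05 tank/data@2024-06 | zfs receive -F offsite1/data

# Export the pool before unplugging and stashing the drive off-site
zpool export offsite1
```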
Some people move the port to a nonstandard one, but that only helps with automated scanners, not determined attackers.
While true, cleaning up your logs such that you can actually see a determined attacker rather than it just getting buried in the noise is still worthwhile.
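Moving the port is a one-line change, and the payoff is that failed-login entries actually mean something again (the port number is arbitrary, and the service may be named "ssh" rather than "sshd" depending on distro):

```
# /etc/ssh/sshd_config
#   Port 2222
sudo systemctl restart sshd

# With the scanner noise largely gone, this becomes readable:
sudo journalctl -u sshd --since "1 day ago" | grep -i "failed password"
```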
Yes, at a cursory glance that’s true. AI-generated images don’t involve the abuse of children, and that’s great. The problem is the follow-on effects. What’s to stop actual child abusers from just photoshopping a 6th finger onto their images and then claiming they’re AI generated?
AI image generation is getting absurdly good now, nearly indistinguishable from actual pictures. By the end of the year I suspect they will be truly indistinguishable. When that happens, how do you tell which images are AI generated and which are real? How do you know who is peddling real CP and who isn’t if AI-generated CP is legal?
10 people buy tickets for $1 each. The total is $10. Winner gets $5, and the historical society gets $5.
It wouldn’t matter. The public doesn’t listen directly to politicians, it gets filtered through the media first, and the media picks and chooses which parts they actually report. The people who would actually hear this already know. The people who would need to hear it never will because Fox won’t show it to them.
You mean that preventing people from being able to purchase the things they need to purchase in order to do their job will slow down their progress? Who could possibly have seen that coming?
Does ZFS allow for easy snapshotting like btrfs?
Absolutely
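A quick illustration, with a made-up dataset name:

```
zfs snapshot tank/home@before-upgrade     # instant, takes no space until data diverges
zfs list -t snapshot                      # list what exists
zfs rollback tank/home@before-upgrade     # roll the whole dataset back
# Individual files can also be copied back out of the hidden .zfs/snapshot directory.
```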
edit a filename while the file is open
Any Linux filesystem will do that
it gives people the option to use an alternate app store if they want but it doesn’t force anyone to.
That argument sounds great in theory, but it would break down within a month or less, once companies start moving their apps off of Apple’s App Store and onto a 3rd-party store that allows all the spyware Apple currently forces them to strip out as the price of access to the iOS market. This move DOES force people to use alternate app stores once companies start moving (not copying, moving) their apps over to those stores to take advantage of the drop in oversight.
Same, I don’t let Docker manage volumes for anything. If I need it to be persistent I bind mount it to a subdirectory inside that container’s own directory, alongside its compose file. It makes backups so much easier as well, since you can just stop all containers, back up everything in ~/docker or wherever you put all of your compose files and volumes, and then restart them all.
It also means you can go hog wild with "docker system prune -af --volumes" and there’s no risk of losing any of your data.
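The difference in practice (the image name and paths here are just placeholders):

```
# Named volume: lives under Docker's management, so prune/cleanup commands can sweep it up
docker run -d --name app -v appdata:/var/lib/app example/app

# Bind mount: just a directory on the host next to the compose file,
# backed up with a plain rsync and never touched by any prune
docker run -d --name app -v ~/docker/app/data:/var/lib/app example/app
```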
Mint is basically Ubuntu with all of Canonical’s BS removed. This definitely counts as Canonical BS, so I’d be surprised if it made its way into Mint.
I’ve always wondered - and figured here is as good a place to ask as anywhere else - what’s the advantage of object storage vs just keeping your data on a normal filesystem?