

You mean that preventing people from purchasing the things they need in order to do their job will slow down their progress? Who could have possibly seen that coming?
Does ZFS allow for easy snapshotting like btrfs?
Absolutely
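For anyone curious, ZFS snapshots are a one-liner; the pool and dataset names here are hypothetical:

```shell
# Create a read-only snapshot of a dataset
zfs snapshot tank/data@before-upgrade

# List all snapshots
zfs list -t snapshot

# Roll the dataset back to the snapshot (discards newer changes)
zfs rollback tank/data@before-upgrade

# Or just browse the snapshot contents read-only
ls /tank/data/.zfs/snapshot/before-upgrade/
```

Snapshots are atomic and initially take no extra space; they only grow as the live dataset diverges.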
edit a filename while the file is open
Any Linux filesystem will do that
it gives people the option to use an alternate app store if they want but it doesn’t force anyone to.
That argument sounds great in theory, but it would break down within a month, once companies start moving their apps off of Apple’s App Store and onto a third-party store that allows all the spyware Apple has forced them to remove as the price of being on iOS. This move DOES force people to use alternate app stores once companies start moving (not copying, moving) their apps over to those stores to take advantage of the drop in oversight.
Same, I don’t let Docker manage volumes for anything. If I need it to be persistent I bind mount it to a subdirectory of the container’s own directory. It makes backups so much easier as well, since you can just stop all containers, back up everything in ~/docker or wherever you put all of your compose files and volumes, and then restart them all.
It also means you can go hog wild with `docker system prune -af --volumes` and there’s no risk of losing any of your data.
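With that layout, a full backup is roughly the following; the `~/docker` tree of per-container subdirectories is an assumption based on the description above:

```shell
#!/bin/bash
# Stop every compose stack under ~/docker (one subdirectory per stack)
for d in ~/docker/*/; do
  (cd "$d" && docker compose down)
done

# Archive the whole tree: compose files and bind-mounted data together
tar -czf "docker-backup-$(date +%F).tar.gz" -C "$HOME" docker

# Bring everything back up
for d in ~/docker/*/; do
  (cd "$d" && docker compose up -d)
done
```

Restoring is the mirror image: untar and `docker compose up -d` in each subdirectory.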
Mint is basically Ubuntu with all of Canonical’s BS removed. This definitely counts as Canonical BS, so I’d be surprised if it made its way into Mint.
I would separate the media and the Jellyfin image into different pools. Media would be a normal ZFS pool full of media files that gets mounted into any VM that needs it, like Jellyfin, sonarr, radarr, qbittorrent, etc. (preferably read-only mounted in Jellyfin if you’re going to expose Jellyfin to the internet).
Elon is really unpopular right now and “regular” republicans feel like he and DOGE are attacking them.
Only those who have been personally harmed. The rest of them, and that’s the vast majority, actually believe Elon is cleaning up the government and finding/eliminating corruption. I work with one; he’s convinced that everything DOGE has found and cut is corruption/fraud, that the kids rooting through the Treasury are geniuses capable of finding things no other administration has been able to (or been willing to?) find, and that ultimately the government will run more efficiently and taxes can be lower for regular people when it’s all said and done. They’re too deep in the rabbit hole to see the light.
Best luck I’ve had with laptops has been Razer, actually. They’re gaming laptops, so a bit warm and loud and the battery life isn’t great, but they’re built like a brick, can be easily opened, all parts are easily replaceable/upgradeable, and since they generally use Intel everything, Linux compatibility is solid as well (except for RGB lighting and stuff, but with OpenRazer and Polychromatic even that usually works except for brand new models).
My last laptop was a Razer Blade 14 which ran great for like 6 years before I just got bored and decided I wanted to upgrade to a newer model with a better display. Over the 6 years I used it I upgraded the RAM, SSD, added a second SSD, upgraded the WiFi card, etc. It ran literally 24/7 during that entire time other than brief moments when I shut it down to throw in a backpack for travel, the only thing I had to replace for maintenance was the battery. I now have a Razer Blade 16 which has been great for the last year, zero issues, also running 24/7.
Before Razer I used Dell, Lenovo, HP, and Asus. None of them lasted more than 2-3 years before either the plastic crap holding it together fell apart, or the monitor, mouse, or keyboard failed, or I wanted/needed to upgrade something that was not user-replaceable (usually RAM or WiFi).
Would you mind if I added this as a discussion (crediting you and this post!) in the github project?
Yeah that would be fine
They didn’t provide an rsync example until later in the post. The comment about not supporting differential backups is in reference to rsync itself, which is incorrect: rsync does support differential backups.
I agree with you that not doing differential backups is a problem, I’m simply commenting that this is not a drawback of using rsync, it’s an implementation problem on the user’s part. It would be like somebody saying “I like my Rav4, it’s just problematic because I don’t go to the grocery store with it” and someone else saying “that’s a big drawback, the grocery store has a lot of important items and you need to be able to go to it”. While true, it’s based on a faulty premise, because of course a Rav4 can go to the grocery store like any other car, it’s a non-issue to begin with. OP just needs to fix their backup script to start doing differential backups.
My KVM hosts use `virsh backup-begin` to make full backups nightly.
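For reference, a minimal sketch of that; the domain name is hypothetical, and without a backup XML spec `virsh backup-begin` uses libvirt’s defaults:

```shell
# Start a full, live backup of a running VM
virsh backup-begin myvm

# Watch the backup job's progress until it completes
virsh domjobinfo myvm
```

Pass an XML description to `backup-begin` if you need to control target paths or do incremental backups; see the libvirt docs for the format.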
All machines, including the KVM hosts and laptops, use rsync with `--link-dest` to create daily incremental versioned backups on my main backup server.
The main backup server pushes client-side encrypted backups which include the latest daily snapshot for every system to rsync.net via Borg.
I also have two DASs with two 22 TB encrypted drives in each. One of these is plugged into the backup server while the other sits powered off in a drawer in my desk at work. The main backup server pushes all backups to the attached DAS weekly, and I swap the two DASs roughly monthly, so the one in my desk at work is never more than a month or so out of date.
It’s not a drawback, because rsync has supported incremental versioned backups for over a decade; you just have to use the `--link-dest` flag and add a couple lines of code around it for management.
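A minimal sketch of that management code, with hypothetical host and paths: each day’s directory hard-links unchanged files against the previous run, so every snapshot looks like a full backup but only costs the delta.

```shell
#!/bin/bash
TODAY=$(date +%F)
DEST=/backups/myhost

# Changed files are copied; unchanged files become hard links
# into yesterday's tree via --link-dest
rsync -a --delete --link-dest="$DEST/latest" user@myhost:/data/ "$DEST/$TODAY/"

# Repoint "latest" at the run that just finished
ln -sfn "$TODAY" "$DEST/latest"
```

Pruning old snapshots is then just `rm -rf` on a dated directory; the hard links keep every other snapshot intact.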
Sure, it’s a bit hack-and-slash, but not too bad. Honestly the dockcheck portion is already pretty complete, I’m not sure what all you could add to improve it. The custom plugin I’m using does nothing more than dump the array of container names with available updates to a comma-separated list in a file. In addition to that I also have a wrapper for dockcheck which does two things:
Basically there are 5 steps to the setup:
Enable Docker’s built-in Prometheus metrics endpoint by adding this to `/etc/docker/daemon.json` (creating the file if it doesn’t exist) and restarting Docker:

```json
{
  "metrics-addr": "127.0.0.1:9323"
}
```
Once Docker is back up, you should be able to run `curl http://localhost:9323/metrics` and see a dump of Prometheus metrics.
The custom dockcheck notify plugin is just this function, which dumps the array of container names with available updates to a comma-separated list in a file:

```bash
send_notification() {
  Updates=("$@")
  # Join the array into a single comma-separated string
  UpdToString=$(printf ", %s" "${Updates[@]}")
  # Strip the leading ", "
  UpdToString=${UpdToString:2}
  File=updatelist_local.txt
  echo -n "$UpdToString" > "$File"
}
```
The wrapper script that runs dockcheck with that plugin:

```bash
#!/bin/bash
# Run from the script's own directory so the output file lands next to it
cd "$(dirname "$0")"
./dockcheck/dockcheck.sh -mni
if [[ -f updatelist_local.txt ]]; then
  mv updatelist_local.txt updatelist.txt
else
  # No updates found: the plugin never wrote the file
  echo -n "None" > updatelist.txt
fi
```
At this point you should be able to run your script, and at the end you’ll have the file `updatelist.txt`, which will contain either a comma-separated list of all containers with available updates or “None” if there are none. Add this script into cron to run on whatever cadence you want; I use 4 hours.
Then there’s the Python program that serves the REST API:

```python
#!/usr/bin/python3
from flask import Flask, jsonify
import os
import time
import requests

app = Flask(__name__)

# Listen addresses for docker metrics
dockerurls = ['http://127.0.0.1:9323/metrics']
# Other dockerstats servers
staturls = []
# File containing list of pending updates
updatefile = '/path/to/updatelist.txt'

@app.route('/metrics', methods=['GET'])
def get_tasks():
    running = 0
    stopped = 0
    updates = ""
    # Scrape the local Docker daemon's Prometheus metrics
    for url in dockerurls:
        response = requests.get(url)
        if response.status_code == 200:
            for line in response.text.split("\n"):
                if 'engine_daemon_container_states_containers{state="running"}' in line:
                    running += int(line.split()[1])
                if 'engine_daemon_container_states_containers{state="paused"}' in line:
                    stopped += int(line.split()[1])
                if 'engine_daemon_container_states_containers{state="stopped"}' in line:
                    stopped += int(line.split()[1])
    # Merge in results from other copies of this program, if any
    for url in staturls:
        response = requests.get(url)
        if response.status_code == 200:
            apidata = response.json()
            running += int(apidata['results']['running'])
            stopped += int(apidata['results']['stopped'])
            if apidata['results']['updates'] != "None":
                updates += ", " + apidata['results']['updates']
    # Read the local update list, but only if it's fresh (< 1 day old)
    if os.path.isfile(updatefile):
        st = os.stat(updatefile)
        age = time.time() - st.st_mtime
        if age < 86400:
            with open(updatefile, "r") as f:
                temp = f.readline()
            if temp != "None":
                updates += ", " + temp
        else:
            updates += ", Error"
    else:
        updates += ", Error"
    if not updates:
        updates = "None"
    else:
        # Strip the leading ", "
        updates = updates[2:]
    status = {
        'running': running,
        'stopped': stopped,
        'updates': updates
    }
    return jsonify({'results': status})

if __name__ == '__main__':
    app.run(host='0.0.0.0')
```
The neat thing about this program is it’s nestable, meaning if you run steps 1-4 independently on all of your Docker servers (assuming you have more than one), you can pick one of the machines to be the “master” and update the `staturls` variable to point at the other ones, allowing it to collect all of the data from other copies of itself into its own output. If the output of this program only needs to be accessed from localhost, you can change the host variable in `app.run` to `127.0.0.1` to lock it down. Once this is running, you should be able to run `curl http://localhost:5000/metrics` and see the running and stopped container counts and available updates for the current machine and any other machines you’ve added to `staturls`. You can then turn this program into a service, or launch it `@reboot` in cron or in `/etc/rc.local`, whatever fits your management style, to start it up on boot. Note that it does verify the age of the `updatelist.txt` file before using it; if it’s more than a day old, that likely means something is wrong with the dockcheck wrapper script or similar, and rather than using stale output the REST API will report “Error” to let you know something is wrong.
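Assuming the defaults, a quick sanity check of the aggregator looks like this; the numbers in the sample response are made up:

```shell
curl -s http://localhost:5000/metrics
# The response is JSON shaped like:
# {"results": {"running": 12, "stopped": 1, "updates": "nginx, redis"}}
```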
Finally, the homepage widget config that displays it:

```yaml
widget:
  type: customapi
  url: http://localhost:5000/metrics
  refreshInterval: 2000
  display: list
  mappings:
    - field:
        results: running
      label: Running
      format: number
    - field:
        results: stopped
      label: Stopped
      format: number
    - field:
        results: updates
      label: Updates
```
Anything on a separate disk can be simply remounted after reinstalling the OS. It doesn’t have to be a NAS, DAS, RAID enclosure, or anything else that’s external to the machine unless you want it to be. Actually it looks like that Beelink only supports a single NVMe disk and doesn’t have SATA, so I guess it does have to be external to the machine, but for different reasons than you’re alluding to.
This is a great tool, thanks for the continued support.
Personally, I don’t actually use dockcheck to perform updates, I only use it for its update-check functionality, along with a custom plugin which, in cooperation with a Python script of mine, serves a REST API that lists all containers on all of my systems with available updates. That then gets pulled into homepage using their custom API function to make something like this: https://imgur.com/a/tAaJ6xf
So at a glance I can see any containers that have updates available, then I can hop into Dockge to actually apply them on my own schedule.
It wouldn’t matter. The public doesn’t listen directly to politicians, it gets filtered through the media first, and the media picks and chooses which parts they actually report. The people who would actually hear this already know. The people who would need to hear it never will because Fox won’t show it to them.