I recently replaced an ancient laptop with a slightly less ancient one.
- host for backups for three other machines
- serve files I don’t necessarily need on the new machine
- relatively lightweight - “server” is ~15 years old
- relatively simple - I’d rather not manage a dozen docker containers.
- internal-facing
- does NOT need to handle Android and friends. I can use Syncthing for that if I need to.
Left to my own devices I’d probably use rsync for 90% of that, but in my dotage I’d like to try something a little more pointy-clicky, or at least more transparent.
Edit: Not SAMBA (I freaking hate trying to make that work)
Edit 2: for the young’uns: NFS (“Network File System”, the classic Unix network share protocol)
Edit 3: LAN only. I may set up a VPN connection one day but it’s not currently a priority. (edited post to reflect questions)
Last Edit: thanks, friends, for this discussion! Based on it I think I’ll at least start with NFS plus my existing backup system (Mint’s thing, which I think is just a GUI in front of rsync). May play w/ modern Samba if I have extra time.
I’ll continue to read the replies though - some interesting ideas.
NFS is pretty good
NFS is still the standard. We’re slowly seeing better adoption of VFS for things like hypervisors.
Otherwise something like SFTPgo or Copyparty if you want a solution that supports pretty much every protocol.
I would say SMB is more the standard. It is natively supported in Linux and works a bit better for file shares.
NFS is better for server style workloads
If you already know NFS and it works for you, why change it? As long as you’re keeping it between Linux machines on the LAN, I see nothing wrong with NFS.
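For a LAN-only setup like this, the server side really is minimal - roughly one line in /etc/exports. A sketch, assuming a Debian/Ubuntu-style install; the path, hostname, and subnet are placeholders:

```shell
# /etc/exports on the "server": share /srv/share read-write with every
# host on the 192.168.1.0/24 LAN (adjust path and subnet to taste)
#   /srv/share  192.168.1.0/24(rw,sync,no_subtree_check)

# after editing /etc/exports, re-export the list
sudo exportfs -ra

# on a client, mount it (or put the equivalent line in /etc/fstab)
sudo mount -t nfs server.lan:/srv/share /mnt/share
```

The `sync` and `no_subtree_check` options are the usual safe defaults; `exportfs -v` will show you what the server is actually exporting.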
Isn’t NFS pretty much completely insecure unless you turn on NFSv4 with Kerberos? The fact that setting that up is such a pain in the ass is what keeps me from it. It’s fine for read-only, though.
I think a reasonable quorum already said this, but NFS is still good. My only complaint is it isn’t quite as user-mountable as some other systems.
So…I know you said no SAMBA, but SAMBA 4 really isn’t bad any more. At least, not nearly as shit as it was.
If you want an easily mountable filesystem for users (e.g. network discovery etc.), it’s pretty tolerable.
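For what it’s worth, a single-share Samba 4 setup is only a few lines these days. A sketch - the share name, path, and user are examples, and it assumes the Unix user already exists:

```shell
# append a minimal share definition to /etc/samba/smb.conf
cat <<'EOF' | sudo tee -a /etc/samba/smb.conf
[shared]
   path = /srv/share
   read only = no
   valid users = alice
EOF

# sanity-check the config, give the user a Samba password, restart
testparm -s
sudo smbpasswd -a alice
sudo systemctl restart smbd
```

After that the share shows up in network discovery on most desktop file managers, which is the main thing NFS doesn’t give you.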
I still use sshfs. I can’t be bothered to set up anything else I just want something that works out of the box.
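The whole “setup” is one command, reusing whatever SSH keys you already have. Hostname and paths below are examples; the keepalive options help with the hangs people complain about:

```shell
# mount a remote directory over SSH (no server-side config needed
# beyond a working sshd); reconnect + keepalives reduce stalls
sshfs user@server.lan:/srv/share ~/mnt/share \
    -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3

# unmount when done
fusermount -u ~/mnt/share
```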
Isn’t that super clunky? I keep getting all kinds of sluggishness, hangs, and the occasional error every time I use it. It ends up working, but wow, does it suck.
I mostly use Samba/CIFS clients, and it’s fast and reliable with properly set up DNS, using only the DNS name or IP address. Not NetBIOS or Active Directory - those are overkill.
I like the sound of that!
However, it looks like it has a lot of potential for an ‘xz’-style exploit injection, so I’ll probably skip it.
From the project’s README.md: “The current maintainer continues to apply pull requests and makes regular releases, but unfortunately has no capacity to do any development beyond addressing high-impact issues. When reporting bugs, please understand that unless you are including a pull request or are reporting a critical issue, you will probably not get a response.”
I am 100% open to exploring other equally zero-effort alternatives, if only I had the time. CURSE being an adult (ノಠ益ಠ)ノ. Is there anything better I should use? Ideally something that works with my existing SSH keys, please.
NFS is still useful. We use it in production systems now. If it ain’t broke, don’t fix it.
And if you have a dedicated system for this, I’d look into TrueNAS Scale.
Samba, or some sort of cloud-like sync system such as Syncthing or Nextcloud.
I use a samba mount behind a VPN.
You should take a look at WebDAV.
I’d use an S3 bucket with s3fs. Since you want to host it yourself, MinIO is the open-source tool to use in place of S3.
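Mounting a self-hosted bucket that way looks something like this - bucket name, endpoint, and keys are all placeholders for whatever your MinIO instance uses:

```shell
# s3fs reads credentials from a mode-600 file
echo 'ACCESS_KEY:SECRET_KEY' > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs

# mount the bucket; path-style requests are needed for
# non-AWS endpoints like a local MinIO server
s3fs mybucket ~/mnt/bucket \
    -o url=http://minio.lan:9000 \
    -o use_path_request_style \
    -o passwd_file="$HOME/.passwd-s3fs"
```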
I hear good things about seaweedfs instead of minio these days
Oh, and if you want to use it as the backing store for a database consider obstore instead of s3fs: https://developmentseed.org/blog/2025-08-01-obstore/
For smaller folders I like using syncthing, that way it’s like having multiple updated backups
Syncthing is neat, but you shouldn’t consider it to be a backup solution. If you accidentally delete or modify a file on one machine, it’ll happily propagate that change to all other machines.
You can turn off “delete”, but modification is a danger, it’s true.
Turning off delete makes it excellent for e.g. backing up photographs from your phone. I’ve got it doing this from my Android to my Raspberry Pi, which puts them on my NAS for me. Saves losing all my pictures if I lose my phone.
I like this solution because I can have the need filled without a central server. I use old-fashioned offline backups for my low-churn, bulk data, and Syncthing for everything else to be eventually consistent everywhere.
If my data was big enough so as to require dedicated storage though, I’d probably go with TrueNAS.
If it’s for backup, zfs and btrfs can send incremental diffs quite efficiently (but of course you’ll have to use those on both ends).
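The incremental flow is snapshot-to-snapshot. A ZFS sketch (pool/dataset names and the backup host are examples; btrfs has the analogous `btrfs send -p`):

```shell
# take snapshots over time
zfs snapshot tank/data@monday
# ... changes happen ...
zfs snapshot tank/data@tuesday

# a full send of the first snapshot has to happen once:
#   zfs send tank/data@monday | ssh backup.lan zfs receive backup/data
# after that, send only the blocks changed between snapshots
zfs send -i tank/data@monday tank/data@tuesday | \
    ssh backup.lan zfs receive backup/data
```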
Otherwise, both NFS and SMB are certainly viable.
I tried both, but TBH I ended up just using SSHFS because I don’t care about becoming an NFS/SMB admin.
NFS and SMB are easy enough to set up, but then when you try to do user-level authentication… they aren’t as easy anymore.
Since I’m already managing SSH keys all over my machines, I feel like SSHFS makes much more sense for me.
I think ZFS send/receive requires root, which can be an issue for security.
Stick with NFS, and use e.g. rsync for backup. Or subversion, if you want to be super-safe.
NFS is really good inside a LAN, just use 4.x (preferably 4.2) which is quite a bit better than 2.x/3.x. It makes file sharing super easy, does good caching and efficient sync. I use it for almost all of my Docker and Kubernetes clusters to allow files to be hosted on a NAS and sync the files among the cluster. NFS is great at keeping servers on a LAN or tight WAN in sync in near real time.
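Pinning the version on the client is one mount option, so you don’t silently fall back to v3 (host and paths are examples):

```shell
# ask for NFSv4.2 explicitly
sudo mount -t nfs -o vers=4.2 server.lan:/srv/share /mnt/share

# show what was actually negotiated for each NFS mount
nfsstat -m
```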
What it isn’t is a backup system or a periodic sync application and it’s often when people try to use it that way that they get frustrated. It isn’t going to be as efficient in the cloud if the servers are widely spaced across the internet. Sync things to a central location like a NAS with NFS and then backups or syncs across wider WANs and the internet should be done with other tech that is better with periodic, larger, slower transactions for applications that can tolerate being out of sync for short periods.
The only real problem I often see in the real world is Windows and Samba (sometimes referred to as CIFS) shares trying to sync the same files as NFS shares, because Windows doesn’t support NFS out of the box, so file locking doesn’t work properly. Samba/CIFS has some advantages, like user authentication tied to Active Directory out of the box, as well as working out of the box on Windows (although older Windows doesn’t support the Samba versions that are secure). So if I need to give a user access to log into a share from within a LAN (or over VPN) from any device to manually pull files, I use that instead. But for my own machines I just set up NFS clients to sync.
One caveat is if you’re using this for workstations or other devices that frequently reboot and/or need to be used offline from the LAN. Either don’t mount the shares on boot, or take the time to set it up properly. By default I see a lot of people get frustrated that it takes a long time to boot because the mount is set as a prerequisite for completing the boot with the way some guides tell you to set it up. It’s not an NFS issue; it’s more of a grub and systemd (or most equivalents) being a pain to configure properly and boot systems making the default assumption that a mount that’s configured on boot is necessary for the boot to complete.
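Concretely, “set it up properly” mostly means two fstab options. A sketch (server and paths are examples):

```shell
# /etc/fstab entry for a laptop that may boot away from the LAN:
#   nofail                 - a missing server never blocks boot
#   x-systemd.automount    - mount lazily on first access, not at boot
#   x-systemd.idle-timeout - unmount again after 60s of inactivity
#
# server.lan:/srv/share  /mnt/share  nfs4  nofail,x-systemd.automount,x-systemd.idle-timeout=60,_netdev  0  0
```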
Thanks for that caveat. I could definitely see myself falling into that.
Yeah, it’s easy enough to configure properly. I have it set up on all of my servers and my laptop to treat it as a network mount, not a local one, and to try to connect on boot but not require it. But it took me a while to understand what it was doing before I even knew to look for a solution. So hopefully that saves you some time. 🙂
TrueNAS is cool. I’ve only used Core so far, but I hear Scale is taking over.
This looks promising. Seems a little heavyweight at first glance… how was it to get up and running?
The GUI makes it pretty painless. It was my first real attempt at self-hosting anything, and my first experience with any kind of NFS/SMB setup at all. I ran it on bare metal for around two years before installing it as a VM on Proxmox.
LAN or internet?
HTTPS is king for internet protocols.
LAN only. I may set up a VPN connection one day but it’s not currently a priority. (edited post to reflect)
NFS works, but HTTP was designed for shitty internet. Keep that in mind. ownCloud or similar might be a good idea.