tldr: I’d like to set up a reverse proxy with a domain and an SSL cert so my partner and I can access a few selfhosted services on the internet, but I’m not sure what the best/safest way to do it is. Asking my partner to use Tailscale or WireGuard is asking too much unfortunately. I was curious to know what you all recommend.
I have some services running on my LAN that I currently access via tailscale. Some of these services would see some benefit from being accessible on the internet (ex. Immich sharing via a link, switching over from Plex to Jellyfin without requiring my family to learn how to use a VPN, homeassistant voice stuff, etc.) but I’m kind of unsure what the best approach is. Hosting services on the internet has risk and I’d like to reduce that risk as much as possible.
1. I know a reverse proxy would be beneficial here so I can put all the services on one box and access them via subdomains, but where should I host that proxy? On my LAN using a dynamic DNS service? In the cloud? If in the cloud, should I avoid a plan where you share CPU resources with other users and get a dedicated box instead?
2. Should I purchase a memorable domain or a domain with a random string of characters so no one could reasonably guess it? Does it matter?
3. What’s the best way to geo-restrict access? Fail2ban? Realistically, the only people I might give access to live within a couple hundred miles of me.
4. Any other tips or info you care to share would be greatly appreciated.
5. Feel free to talk me out of it as well.
I use traefik with a wildcard domain pointing to a Tailscale IP for services I don’t want to be public. For the services I want to be publicly available I use cloudflare tunnels.
For point number 2, security through obscurity is not security.
Besides, all issued certificates are logged publicly. You can search them here: https://crt.sh/
Nginx Proxy Manager is easy to set up, will do Let’s Encrypt ACME certs, and has a nice GUI to manage it.
If it’s just access to your stuff for people you trust, use tailscale or wireguard (or some other VPN of your choice) instead of opening ports to the wild internet.
Much less risk.
Caddy with Cloudflare support in a docker container.
This is the solution.
Caddy is simple.
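For a taste of that simplicity, a single command (domain made up here, 8096 is Jellyfin’s default port) gets you a reverse proxy with an automatically issued and renewed Let’s Encrypt cert, assuming ports 80/443 reach the box:
caddy reverse-proxy --from jellyfin.example.com --to localhost:8096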
I currently have a nginx docker container and certbot docker container that I have working but don’t have in production. No extra features, just a barebones reverse proxy with an ssl cert. Knowing that, I read through Caddy’s homepage but since I’ve never put an internet facing service into production, it’s not obvious to me what features I need or what I’m missing out on. Do you mind sharing what the quality of life improvements you benefit from with Caddy are?
Honestly, if you know nginx just stick with it. There’s nothing to be gained by learning a new proxy.
Use Mozilla’s SSL config generator if you want to harden nginx (or any proxy you choose): https://ssl-config.mozilla.org/
I never went too far down the nginx route, so I can’t really compare the two. I ended up with caddy because I self-host vaultwarden and it really doesn’t like running over http (for obvious reasons) and caddy was the instruction set I found and understood first.
I don’t make a lot of what I host available to the wider internet, for the ones that I do, I recently migrated to using a Cloudflare tunnel to deal with the internet at large, but still have it come through caddy once it hits my server to get ssl. For everything else I have a headscale server in Oracle’s free tier that all my internal services connect to.
or a domain with a random string of characters so no one could reasonably guess it? Does it matter?
That does not work. As soon as you get SSL certificates, expect the domain name to be public knowledge, especially with Let’s Encrypt and all other certificate authorities with transparency logs. As a general rule, don’t rely on something to be hidden from others as a security measure.
Damn, I didn’t realize they had public logs like that. Thanks for the heads up
https://crt.sh/ would make anyone who thought obscurity would be a solution poop themselves.
How do you handle SSL certs and internet access in your setup?
I have NPM running as a “gateway” between my LAN and the internet and let it handle all of my certificates using the built-in Let’s Encrypt features. None of my hosted applications know anything about certificates inside their Docker containers.
As for your questions:
1. You can and should – it makes managing the applications much easier. Use some form of containerization. Subdomains and correct routing are handled by the reverse proxy: you basically tell the proxy “when a request for foo.example.com comes in, forward it to myserver.local, port 12345”, where 12345 is the port the container listens on. (A minimal run command for NPM itself is sketched after this list.)
2. 100% depends on your use case. I purchased a domain because I host stuff for external access, too. My setup just reports its external IP address to my domain provider, so it is basically a dynamic DNS service but with a “real domain”. If you plan to host just for yourself and your friends, some generic subdomain from a dynamic DNS service would do the trick. (Using NPM’s Let’s Encrypt configuration will work with that, too.)
3. You can’t. Any geo-restriction can be circumvented. If you want to restrict access, use HTTP basic auth. You can set that up using NPM, too, so users authenticate against NPM and only when that succeeds is the routing to the actual content done.
4. You might want to look into Cloudflare Tunnel to hide your real IP address and protect against DDoS attacks.
5. No 🙂
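If it helps, this is roughly how NPM itself is usually started (image and ports per its docs; the volume paths are just examples). The foo.example.com -> myserver.local:12345 mapping from point 1 is then added as a “Proxy Host” in the admin UI on port 81:
docker run -d --name nginx-proxy-manager \
  -p 80:80 -p 81:81 -p 443:443 \
  -v "$(pwd)/data:/data" \
  -v "$(pwd)/letsencrypt:/etc/letsencrypt" \
  jc21/nginx-proxy-manager:latest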
“NPM” node package manager?
1. Yeah, I’ve been playing around with Docker and a domain to see how all that worked. Got the subdomains to work and everything, just don’t have them pointing to services yet.
3. I’m definitely interested in the authentication part here. Do you have any tutorials you could share?
4. Will do, thanks
5. ❤️
I don’t know how markdown works. that should be 1,3,4,5
Authentication with NPM is pretty straightforward. You basically just configure an ACL, add your users, and configure the proxy host to use that ACL.
I found this video explaining it: https://youtu.be/0CSvMUJEXIw?t=62
NPM unfortunately has a long-standing bug from 2020 that requires one extra configuration step when setting up the ACL shown in the video.
At the point where he is on the “Access” tab with all the allow and deny entries, you need to add an allow entry with
0.0.0.0/0
as the IP address. Other than that, the setup shown in the video works in the most recent version.
nginx proxy manager
I was reading this and thinking node package manager too, and I was both confused and concerned that somebody would base all of their security on the Node package manager!
That makes much more sense 🙂
there’s so many acronyms. Thanks
Tailscale is very popular among people I know who have similar problems. Supposedly it’s pretty transparent and easy to use.
If you want to do it yourself, setting up dyndns and a wireguard node on your network (with the wireguard udp port forwarded to it) is probably the easiest path. The official wireguard vpn app is pretty good at least for android and mac, and for a linux client you can just set up the wireguard thing directly. There are pretty good tutorials for this iirc.
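As a rough sketch of the server side (assuming Debian/Ubuntu with wireguard-tools installed; addresses, the port, and the key placeholders are only examples; run as root):
# generate the server keypair
wg genkey | tee /etc/wireguard/server.key | wg pubkey > /etc/wireguard/server.pub
# minimal /etc/wireguard/wg0.conf; paste the client's public key into the [Peer] block
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <contents of server.key>

[Peer]
PublicKey = <client public key>
AllowedIPs = 10.8.0.2/32
EOF
# bring the interface up now and on every boot
wg-quick up wg0
systemctl enable wg-quick@wg0
On the client side, PersistentKeepalive = 25 in its [Peer] section keeps the tunnel from going idle behind NAT, and 51820/udp is the port you’d forward on the router.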
Some dns name pointing to your home IP might in theory be an indication to potential hackers that there’s something there, but just having an alive IP on the internet will already get you malicious scans. Wireguard doesn’t respond unless the incoming packet is properly signed so it doesn’t show up in a regular scan.
Geo-restriction might just give a false sense of security. Fail2ban is probably overkill for a single udp port. Better to invest in having automatic security upgrades on and making your internal network more zero trust
It doesn’t improve security much to host your reverse proxy outside your network, but it does hide your home IP if you care.
If your app can be exploited over the web and through a proxy, it doesn’t matter whether that proxy is on the same machine or across the network.
I use nginx proxy manager and let’s encrypt with a porkbun domain, was very easy to set up for me. Never tried caddy/traefik/etc though. Geo blocking happens on my OPNsense with the built in tools.
Either Tailscale or Cloudflare Tunnels are the best-suited solutions, as other comments said.
For Tailscale, as you already set it up, just make sure you have an exit node where your services are. I had to do a bit of tinkering to make sure that the IPs were resolved: it’s just an argument to the tailscale command.
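For reference, I believe these are the flags in question (double-check tailscale up --help for your version; the exit-node IP is a placeholder from Tailscale’s 100.x range):
# on the box hosting the services (the exit node may also need approval in the admin console)
sudo tailscale up --advertise-exit-node
# on the client device: route traffic through it and accept Tailscale's DNS so names resolve
sudo tailscale up --exit-node=100.64.0.1 --accept-dns=true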
But if you don’t want to use Tailscale because it’s too complicated for your partner, then Cloudflare Tunnels are the other way to go.
How it works is by creating a tunnel between your services and Cloudflare, kind of how a VPN would work. You usually use the cloudflared CLI, or go directly through Cloudflare’s website, to configure the tunnel. You should have a DNS domain imported into Cloudflare, by the way, because you have to set up a binding such as service.mydns.com -> myservice.local. Cloudflare can then resolve your local service and expose it at a public URL.
Just so you know, Cloudflare Tunnels are free for this kind of usage; however, Cloudflare holds the keys for your SSL traffic, so in theory they could look at your requests.
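Roughly, the cloudflared flow looks like this (tunnel name and local port are made up; the same mapping can also be done from Cloudflare’s dashboard):
cloudflared tunnel login                                   # authenticate against your Cloudflare account
cloudflared tunnel create my-tunnel                        # creates the tunnel and a credentials file
cloudflared tunnel route dns my-tunnel service.mydns.com   # binds the public hostname to the tunnel
# ~/.cloudflared/config.yml then maps the hostname to the local service, e.g.:
#   tunnel: my-tunnel
#   credentials-file: <path printed by the create command>
#   ingress:
#     - hostname: service.mydns.com
#       service: http://myservice.local:8080
#     - service: http_status:404
cloudflared tunnel run my-tunnel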
Best of luck with the setup!
Thanks for the info, I appreciate it
On my home network I have nginxproxymanager running Let’s Encrypt with my domain for HTTPS, currently only for Vaultwarden (I’m testing it for a bit before rolling it out or migrating wholly over to HTTPS). My domain is a ######.xyz that’s cheap.
For remote access I use Tailscale. For friends and family I give them a relay [a Raspberry Pi with nginx which proxies them over Tailscale] that sits on their home network; that way they need “something they have” [the relay] and “something they know” [login credentials] to get at my stuff. I won’t implement biometrics for “something they are”. This is post hoc justification though, and nonsense to boot. I don’t want to expose a port, a VPS has low WAF, and I’m not installing Tailscale on all of their devices, so a relay is an unhappy compromise.
For bonus points I run pihole to pretty up the domain names to service.swirl and run a homarr instance so no-one needs to remember anything except home.swirl, but if they do remember immich.swirl that works too.
If there are many ways to skin a cat I believe I chose to use a spoon; don’t be like me. Updating each Dockge instance is a couple of minutes and updating DietPi is a few minutes more, which, individually, is not a lot on my weekly/monthly maintenance respectively. But on aggregate… I have checklists. One day I’ll write a script that will ssh into a machine > update/upgrade the os > docker compose pull/rebuild/purge > move on to the next relay… That’ll be my impetus to learn how to write a script.
That’ll be my impetus to learn how to write a script.
This part caught my eye. You were able to do all that other stuff without ever attempting to write a script? That’s surprising and awesome. Assuming you are running everything on a Linux server, I feel like a bash script run via a cronjob would be your best bet: no need to ssh into the server, just let it do it on its own. I haven’t tested any of this but I do have scripts I wrote that do automatic ZFS backups and scrubs; the order should go something like:
open the terminal on the server and type
mkdir scripts
cd scripts
nano docker-updates.sh
type something along the lines of this (I’m still learning docker so adjust the commands to your needs)
#!/bin/bash
# go to the directory that contains your docker-compose.yml
cd /path/to/your/compose/directory
# pull new images, recreate the containers, then clean up old images
docker compose pull && docker compose up -d
docker image prune -f
save the file and then type
sudo chmod +x ./docker-updates.sh
to make it executable, and finally set up a cronjob to run the script at specific intervals. Type
crontab -e
or
sudo crontab -e
(this is if you want to run the script as root, but ideally you just add your user to the docker group so this shouldn’t be needed) and at the bottom of the file type this and save; that’s it:
# runs script at 1am on the first of every month
0 1 1 * * /path/to/scripts/docker-updates.sh
this website will help you choose a different interval
For OS updates you basically do the same thing except the script would look something like: (I forget if you need to type “sudo” or not; it’s running as root so I don’t think you need it but maybe try it with sudo in front of both "apt"s if it’s not working. Also use whatever package manager you have if you aren’t using apt)
while in the scripts folder you created earlier
nano os-updates.sh
#!/bin/bash
# refresh package lists and apply all upgrades, then reboot
apt update -y && apt upgrade -y
reboot
save and don’t forget to make it executable
then use
sudo crontab -e
(because you’ll need root privileges to update; this will run the script as root without requiring you to input your password) and at the bottom add:
# runs script at 12am on the first of every month
0 0 1 * * /path/to/scripts/os-updates.sh
I did think about cron but, long ago, I heard it wasn’t best practice to update through cron because the lack of logs makes it difficult to see where things went wrong, when they do.
I’ve got automatic-upgrades running on stuff so it’s mostly fine. Dockge is running purely to give me a way to upgrade docker images without having to ssh. It’s just the monthly routine of “apt update && apt upgrade -y” *5 that sucks.
Thank you for the advice though. I’ll probably set cron to update the images with the script as you suggest. I have a “maintenance” homarr page as a budget uptime kuma so I can quickly look there to make sure everything is pinging at least. I made the page so I can quickly get to everyone’s dockge, pihole and nginx but the pings were a happy accident.
the lack of logs
That’s the best part: with a script, you can pipe the output of the updates into a log file you create yourself. I don’t currently do that; if something breaks, I just roll back to a previous snapshot and try again later, but it’s possible and seemingly straightforward.
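For example, the cron entry from the earlier comment can append everything the script prints, errors included, to a file (the log path is just an illustration):
# runs at 1am on the first of every month and keeps a log of each run
0 1 1 * * /path/to/scripts/docker-updates.sh >> /var/log/docker-updates.log 2>&1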
This askubuntu link will probably help
All good info. Thank you kindly.
Nginx Proxy Manager + LetsEncrypt.
A fairly common setup is something like this:
Internet -> nginx -> backend services.
nginx is the https endpoint and has all the certs. You can manage the certs with letsencrypt on that system. This box now handles all HTTPS traffic to and within your network.
The more paranoid will have parts of this setup all over the world, connected through VPNs so that “your IP is safe”. But it’s not necessary and costs more. Limit your exposure, ensure your services are up-to-date, and monitor logs.
fail2ban can give some peace-of-mind for SSH scanning and the like. If you’re using certs to authenticate rather than passwords though you’ll be okay either way.
Update your servers daily. Automate it so you don’t need to remember. Even a simple “doupdates” script that just does “apt-get update && apt-get upgrade && reboot” will be fine (though you can make it smarter about when it needs to reboot). Have its output mailed to you so that you see if there are failures.
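A minimal sketch of such a script, with the smarter-reboot part included (assumes Debian/Ubuntu; cron mails the output to whatever MAILTO is set to in the crontab):
#!/bin/bash
# "doupdates": refresh package lists, upgrade, and only reboot when the distro asks for it
apt-get update && apt-get upgrade -y
if [ -f /var/run/reboot-required ]; then
    reboot
fi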
You can register a cheap domain pretty easily, and then you can sub-domain the different services. nginx can point “x.example.com” to backend service X and “y.example.com” to backend service Y based on the hostname requested.
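For the certificate side of that, a sketch using certbot’s nginx plugin (the package name is the Debian/Ubuntu one, and the domains are the placeholders from above):
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d x.example.com -d y.example.com
# certbot edits the matching nginx server blocks and installs a timer for automatic renewal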
I would recommend automating only daily security updates, not all updates.
Ubuntu and Debian have “unattended-upgrades” for this. RPM-based distros have an equivalent.
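On Debian/Ubuntu that’s roughly:
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades   # enables the periodic security-update job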
Agree - good point.
- I got started with a guide from these guys back in 2020. I still use traefik as my reverse proxy and Authelia for authentication, and it has worked great all this time. As someone else said, everything runs in containers on a single box, which keeps things separated and super easy to manage. I should probably look into a secondary server as a live backup, but that’s a lot of work / expense. I have a Cloudflare dynamic DNS container running for that.
- I would definitely advocate for owning your own domain, for the added use case of owning your own email addresses. I can now switch email providers and don’t have to worry about losing anything. This would also lean towards a more memorable domain, or at least a second domain that is memorable. Stay away from the country TLDs or “cute” generic TLDs and stay with a tried and true .com or .net (which may take some searching).
- I don’t bother with this, I just run my server behind Cloudflare, and let them protect my server. Some might disagree, but it’s easy for me and I like that.
- Containers, containers, containers! Probably Docker since it’s easy, but Podman if you really want to get fancy / extra secure. Also, make sure you have a git repo for your compose files (a minimal sketch follows this list), and a solid backup strategy from the start (so much easier than going back and doing it later). I use Backblaze for my backups and it’s $2/month for some peace of mind.
- Do it!!!
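On the compose-files-in-git point from the list above, a minimal sketch (the path and remote are made up):
cd /opt/stacks                       # wherever your docker-compose.yml files live
git init
git add .                            # keep secrets in an untracked .env and .gitignore it
git commit -m "initial compose files"
git remote add origin git@example.com:you/compose-files.git   # hypothetical private remote
git push -u origin HEAD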
I’m an idiot and never linked the link
https://www.smarthomebeginner.com/authentik-docker-compose-guide-2025/
Tailscale is completely transparent on any devices I’ve used it on. Install, set up, and never look at it again because unless it gets turned off, it’s always on.
I’ve run into a weird issue where on my phone, Tailscale will disconnect and refuse to reconnect for a seemingly random amount of time, but usually less than an hour. It doesn’t happen often, but it is often enough that I’ve started to notice. I’m not sure if it’s a network issue or an app issue, but during that time I can’t connect to my services. All that to say, my tolerance for that is higher than my partner’s; the first time something didn’t work, they would stop using it lol
So I have it running on about 20 phones for customers of mine that use Blue Iris with it. But these are all Apple devices, I’m the only one with Android. I’ve never had a complaint except one person that couldn’t get on at all, and we found that for some reason the Blue Iris app was blacklisted in the network settings from using the VPN. But that’s the closest I’ve seen to your problem.
I wonder if you set up a ping every 15 seconds from the device to the server if that would keep the tunnel active and prevent the disconnect. I don’t think tailscale has a keepalive function like a wireguard connection. If that’s too much of a pain, you might want to just implement Wireguard yourself since you can set a KeepAlive value and the tunnel won’t go idle. Tailscale is probably wanting to reduce their overhead so they don’t include a keepalive.
relatable
I use nginx manager in its own docker container on my unraid server. Was pretty simple to set up all things considered. I would call myself better with hardware than software but not a complete newb and I got it running with minimal headache.