Hello! 😀
I want to share my thoughts on Docker and maybe start a discussion about it!
A few months ago I started my homelab, and like any good “homelabbing guy” I absolutely loved using Docker. Simple to deploy and everything. Sadly, these days my mind is changing… I recently switched to LXC containers to make backups easier, and the experience is pretty great; the only downside is that not every piece of software is available natively outside of Docker 🙃
I also switched to get more control, since Docker can make it difficult to set up things the devs didn’t really plan for.
So here are my thoughts: slowly, I’m going to leave Docker for a more old-school way of hosting services. Don’t get me wrong, Docker is awesome in some use cases; the main ones are that it’s really portable and simple to deploy, with no hundreds of dependencies, etc. And through this I think I’ve really figured out where Docker can be useful: not for every single homelab setup, and mine isn’t one of them.

Maybe I’m doing something wrong, but I’ll let you discuss it in the comments. Thanks!

  • SpazOut@lemmy.world · 23 hours ago

    For me the power of Docker is its inherent immutability. I want to be able to move a service around without having to manually tinker, install packages, change permissions, etc. It’s repeatable and reliable. However, getting to the point of understanding it well enough to do this reliably can be a huge investment of time. As a daily user of Docker (and k8s) I would choose it over a VM every day. I’ve lost count of the number of VMs I’ve set up following installation guides and missed a single step, leaving machines that should be identical but aren’t. I do understand the frustration with it when you first start, but IMO stick with it, as the benefits are huge.
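    A minimal sketch of what that repeatability looks like in practice (the service and image here are just examples, not from the thread): one compose file captures the whole deployment, so moving it to another host is copy plus `docker compose up`.

    ```yaml
    # docker-compose.yml — hypothetical example service
    services:
      whoami:
        image: traefik/whoami:latest   # any stateless demo image works here
        ports:
          - "8080:80"                  # host:container
        restart: unless-stopped
    ```

    The same file produces the same container on any host with Docker installed, which is the immutability being described.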

  • Auli@lemmy.ca · 1 day ago

    And I’ve done the exact opposite: moved everything off of LXC into Docker containers. So much easier and nicer, and fewer machines to maintain.

  • Decq@lemmy.world · 1 day ago

    I’ve never really liked the convoluted Docker tooling. And I’ve been hit a few times by a Docker image update just breaking everything (looking at you, Nginx Proxy Manager…). Now I’ve converted everything to NixOS services/containers, and I couldn’t be happier with the ease of configuration and control. Backup is just a matter of pushing my flake to GitHub and I’m done.

  • SanndyTheManndy@lemmy.world · 2 days ago

    I used Docker for my home server for several years, but managing everything with a single docker compose file that I edit over SSH became too tiring, so I moved to Kubernetes using k3s. Painless setup, and far easier to control and monitor remotely. The learning curve is there, but I already use Kubernetes at work. For starters, it’s way easier to set up routing and storage with k3s than juggling volumes ever was with Docker.
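    For context, in k3s a single compose service becomes a Deployment plus a Service manifest; this is a hedged sketch with hypothetical names, not the commenter’s actual config.

    ```yaml
    # deployment.yaml — roughly what replaces one compose entry under k3s
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: whoami
    spec:
      replicas: 1
      selector:
        matchLabels: { app: whoami }
      template:
        metadata:
          labels: { app: whoami }
        spec:
          containers:
            - name: whoami
              image: traefik/whoami:latest
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: whoami
    spec:
      selector: { app: whoami }
      ports:
        - port: 80
    ```

    Applied with `kubectl apply -f deployment.yaml`; k3s also ships with Traefik as a built-in ingress controller, which is part of why routing is easier there.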

      • SanndyTheManndy@lemmy.world · 1 day ago

        Several services are interlinked, and I want to share configs across services. Docker doesn’t provide a clean interface for separating and bundling network interfaces, storage, and containers the way k8s does.

  • beerclue@lemmy.world · 3 days ago

    I’m actually doing the opposite :)

    I’ve been using VMs, LXC containers, and Docker for years. In the last 3 years or so, I’ve slowly moved to just Docker containers. I still have a few VMs, of course, but they only run Docker :)

    Containers are a breeze to update, there is no dependency hell, no separate VMs for each app…

    More recently, I’ve been trying out kubernetes. Mostly to learn and experiment, since I use it at work.

  • markc@lemmy.world · 2 days ago

    Docker is a convoluted mess of overlays and truly weird network settings. I found that I have no interest in application containers and would much prefer to set up multiple services in a system container (or VM) as if it were a bare-metal server. I deploy a small Proxmox cluster with Proxmox Backup Server in a CT on each node, and often use scripts from https://community-scripts.github.io/ProxmoxVE/. Everything is automatically backed up (and remote-synced twice) with a deduplication factor of 10. A Dockerless homelab FTW!

    • foremanguy@lemmy.ml (OP) · 2 days ago

      Yeah, I share your point of view and I think I’m going this way. These scripts are awesome, but I prefer writing my own, as I get more control over them.

  • ikidd@lemmy.world · 3 days ago

    Are you using docker-compose and local bind mounts? If not, you’re making backing up much harder than it needs to be. It’s certainly easier than backing up LXCs, and a whole lot easier to restore.

  • CameronDev@programming.dev · 3 days ago

    Are you using docker compose files? Backup should be easy: you have your compose files to configure the containers, and those files can easily be committed somewhere or backed up.

    Data should be volume mounted into the container, and then the host disk can be backed up.
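    As a hedged sketch of that layout (paths and image are placeholders, not from the thread), a bind mount keeps all state in one host directory:

    ```yaml
    # docker-compose.yml — config lives in git, data lives on the host disk
    services:
      app:
        image: example/app:latest      # placeholder image
        volumes:
          - ./data:/var/lib/app        # bind mount: backing up ./data plus this file captures everything
        restart: unless-stopped
    ```

    With this split, backing up the host’s `./data` directory and the compose file is the entire backup story.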

    The only app that I’ve had to fight docker on is Seafile, and even that works quite well now.

    • foremanguy@lemmy.ml (OP) · 3 days ago

      Using docker compose, yeah. I find it hard to tweak the network and the app settings; it’s like putting obstacles in my road.

      • CameronDev@programming.dev · 3 days ago (edited)

        Its networking is a bit hard to tweak, but I also don’t find I need to most of the time. And when I do, it’s usually just setting the network to host and calling it done.
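        For reference, the host-network escape hatch mentioned here is a single line in compose (service name hypothetical):

        ```yaml
        services:
          app:
            image: example/app:latest
            network_mode: host   # container shares the host's network stack; no port mapping needed (or allowed)
        ```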

      • oshu@lemmy.world · 3 days ago (edited)

        Docker as a technology is a misguided mess but it is an effective tool.

        Podman is a much better design that solves the same problem.

        Containers can be used well or very poorly.

        Docker makes it easy to ship something without knowing anything about systems engineering, which some see as an advantage, but I don’t.

        At my shop, we use almost no public container images because they tend to be a security nightmare.

        We build our own images in-house with strict rules about what can go inside. Otherwise it would be absolute chaos.

        • Auli@lemmy.ca · 1 day ago

          Cool, but I don’t want to have to know about systems engineering, and if that were a requirement for using software then nobody would be using it.

  • InnerScientist@lemmy.world · 3 days ago

    I use podman with home-manager configs. I could run the services natively, but currently I have a user for each service that runs the podman containers. This way each service is securely isolated from the others and from the rest of the system. Maybe if/when NixOS supports good SELinux rules I’ll switch back to running things natively.

    • agile_squirrel@lemmy.ml · 3 days ago

      This sounds great! I’d love to see your config. I’m not using home-manager, but have one non-root user for all podman containers. One user per service seems like a great setup.

      • InnerScientist@lemmy.world · 2 days ago (edited)

        Yeah, it works great and is very secure, but every time I create a new service there’s a lot of copy-paste boilerplate. Maybe I’ll put most of that into a Nix function at some point, but until then here’s an example n8n config, as loaded from the main NixOS file.

        I wrote this last night for testing purposes and just added comments. The config works, but n8n uses SQLite and probably needs some other stuff that I haven’t had a chance to use yet, so keep that in mind.
        Podman support in home-manager is also really new and doesn’t support pods (multiple containers, one loopback) and some other things yet; most of it can be compensated for with extraPodmanArgs. Before this existed I used pure file definitions to write quadlet/systemd configs, which was even more boilerplate, but also mostly copy-paste.

        Gaze into the boilerplate
        { config, pkgs, lib, ... }:
        
        {
            users.users.n8n = {
                # calculate sub{u,g}id using uid
                subUidRanges = [{
                    startUid = 100000+65536*( config.users.users.n8n.uid - 999);
                    count = 65536;
                }];
                subGidRanges = [{
                    startGid = 100000+65536*( config.users.users.n8n.uid - 999);
                    count = 65536;
                }];
                isNormalUser = true;
                linger = true; # start user services on system start; the first start after `nixos-rebuild switch` still has to be done manually for some reason though
                openssh.authorizedKeys.keys = config.users.users.root.openssh.authorizedKeys.keys; # allows the ssh keys that can login as root to login as this user too
            };
            home-manager.users.n8n = { pkgs, ... }:
            let
                dir = config.users.users.n8n.home;
                data-dir = "${dir}/${config.users.users.n8n.name}-data"; # defines the path "/home/n8n/n8n-data" using evaluated home paths, could probably remove a lot of redundant n8n definitions....
            in
            {
                home.stateVersion = "24.11";
                systemd.user.tmpfiles.rules =
                let
                    folders = [
                        "${data-dir}"
                        #"${data-dir}/data-volume-name-one" 
                    ];
                    formated_folders = map (folder: "d ${folder} - - - -") folders; # a function that takes a path string and formats it for systemd tmpfiles such that they get created as folders
                in formated_folders;
        
                services.podman = {
                    enable = true;
                    containers = {
                        n8n-app = { # define a container, service name is "podman-n8n-app.service" in case you need to make multiple containers depend and run after each other
                            image = "docker.n8n.io/n8nio/n8n";
                            ports = [
                                "${config.local.users.users.n8n.listenIp}:${toString config.local.users.users.n8n.listenPort}:5678" # I'm using a self-defined option to keep track of all ports and uids in a separate file; these values just map to "127.0.0.1:30023:5678", and a Caddy reverse proxy points there using the same option for the port.
                            ];
                            volumes = [
                                "${data-dir}:/home/node/.n8n" # the folder we created above
                            ];
                            userNS = "keep-id:uid=1000,gid=1000"; # n8n stores files as non-root inside the container so they end up as some high uid outside and the user which runs these containers can't read it because of that. This maps the user 1000 inside the container to the uid of the user that's running podman. Takes a lot of time to generate the podman image for a first run though so make sure systemd doesn't time out
                            environment = {
                                # MYHORSE = "amazing";
                            };
                            # there's also an environmentfile option for secret management, which works with sops if you set the owner of the secret/secret template
                            extraPodmanArgs = [
                                "--pull=newer" # always pull newer images when starting; I could make this declarative but I haven't found a good way to automagically update the container hashes in my nix config at the push of a button.
                            ];
                         # few more options exist that I didn't need here
                        };
                    };
                };
            };
        }
        
        
  • mesamune@lemmy.world · 3 days ago (edited)

    Honestly, after using Docker and containerization for more than a decade, my home setups are just YunoHost or bare metal (a small Pi) with some periodic backups. I care more about my own time now than my home setup, and I want things to just be stable. It’s been good for a couple of years now, with nothing more than some quick updates. You don’t have to deal with infra changes on updates, you don’t have to deal with slowdowns; everything works pretty well.

    At work it’s different: Docker, Kubernetes, etc. are awesome because they can deal gracefully with dependencies, multiple deploys per day, and large infra. But I’ll be the first to admit that takes a bit more manpower, and monitoring systems much better than a small home setup’s.

    • WeAreAllOne@lemm.ee · 3 days ago

      I tend to agree with your opinion, but lately YunoHost has had quite a few broken apps; they’re not very fast with updates and don’t have many active developers. Hats off to them though, because they’re doing the best they can!

      • mesamune@lemmy.world · 3 days ago

        I have to agree; the community seems to come and go. Some apps have daily updates and some have been updated only once. If I were to start a new server, I would probably still pick YunoHost, but drop some of the older apps and run them separately as one-offs. The Lemmy one, for example, is stuck on a VERY old version. However, the GotoSocial app is updated every time there is an update in the main repo.

        Still, super good support for something that is free and open source. Stable too :) But sometimes stability means old.

    • foremanguy@lemmy.ml (OP) · 3 days ago

      Yeah, I think that in the end, even if it seems a bit “retro”, the “normal install” with periodic backups/updates on a default VM (or even LXC containers) is the best to use: the most stable and configurable.

      • Auli@lemmy.ca · 1 day ago

        How is it more stable or configurable? I have Docker containers running, with a daily backup of the folder where all the data lives going off-site. I also back up the whole container daily on-site. I have found it so easy. I admit it was a pain to learn, but after everything was moved over it has been easier.

      • mesamune@lemmy.world · 3 days ago

        Do you use any sort of RAID? Recently I’ve been using an old SSD, but back 9-ish years ago I used to back up everything with a RAID system; it took too much time to keep up with, though.

  • N.E.P.T.R@lemmy.blahaj.zone · 3 days ago (edited)

    Docker is good when combined with the gVisor runtime for better isolation.

    What is gVisor?

    gVisor is an application kernel, written in memory-safe Go, that emulates most system calls and massively reduces the attack surface of the host kernel. This matters because the host and guest normally share the same kernel, and Docker runs rootful: root inside a Docker container is effectively root on the host once a sandbox escape is found, which could happen if a container image requires unsafe permissions like Docker socket access. gVisor protects against privilege escalation by only using root at startup and never handing root over to the guest.
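    A sketch of wiring gVisor in, assuming `runsc` is already installed (paths and service name are illustrative): the runtime is registered with the Docker daemon, then selected per container.

    ```yaml
    # docker-compose.yml fragment — run this service under gVisor
    # (assumes /etc/docker/daemon.json registers the runtime, e.g.:
    #   { "runtimes": { "runsc": { "path": "/usr/local/bin/runsc" } } } )
    services:
      app:
        image: example/app:latest
        runtime: runsc   # system calls are handled by gVisor's application kernel, not the host kernel
    ```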

    The Sydbox OCI runtime is also cool, and faster than gVisor (though both are quick).

  • huskypenguin@sh.itjust.works · 3 days ago

    I love Docker, and backups are a breeze if you’re using ZFS or BTRFS with volume sending. That is the one bummer about Docker: it relies on you to back it up instead of having its own native backup system.
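    The snapshot-and-send workflow being described looks roughly like this (a hedged command sketch; pool, dataset, and host names are all hypothetical):

    ```shell
    # Assume Docker volumes/bind mounts live on the dataset tank/docker.
    zfs snapshot tank/docker@nightly-2024-01-01          # atomic point-in-time copy, near-instant
    zfs send tank/docker@nightly-2024-01-01 | \
      ssh backup-host zfs receive backuppool/docker      # replicate the snapshot to another machine
    # Incremental follow-ups only ship the delta since the last snapshot:
    # zfs send -i @nightly-2024-01-01 tank/docker@nightly-2024-01-02 | ssh backup-host zfs receive backuppool/docker
    ```

    The containers themselves stay disposable; only the dataset holding their data needs to travel.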

    • foremanguy@lemmy.ml (OP) · 3 days ago

      What are you hosting on Docker? Are you configuring your apps afterwards? Did you use prebuilt images or build them yourself?

      • huskypenguin@sh.itjust.works · 3 days ago

        I use the *arr suite, a Project Zomboid server, a Foundry VTT server, Invoice Ninja, Immich, Nextcloud, qBittorrent, and Caddy.

        I pretty much only use prebuilt images; I run them like appliances. Anything custom I’d run in a VM with snapshots, as my Docker skills do not run that deep.

        • foremanguy@lemmy.ml (OP) · 2 days ago

          This is why I don’t get anything out of using Docker: I want to tweak my configuration, and Docker adds an extra level of complexity.

      • huskypenguin@sh.itjust.works · 3 days ago

        I should also say I use Portainer for some graphical hand-holding. And I run Watchtower for updates (although Portainer can monitor a GitHub repo and run updates based on monitored merges).

        For simplicity I create all my volumes in the Portainer GUI, then specify the mount points in the docker compose file (Portainer calls this a “stack” for some reason).

        The volumes are looped into the base OS’s (TrueNAS SCALE) ZFS snapshots. Any restoration is dead simple. It keeps 1 yearly, 3 monthly, 4 weekly, and 1 daily snapshot.

        All media etc. is mounted via NFS shares (for applications like Immich or Plex).

        Restoration to a new machine should be as simple as pasting the compose file and restoring the Portainer volumes.

        • foremanguy@lemmy.ml (OP) · 2 days ago

          I don’t really like Portainer: first, their business model is not that good, and second, they do strange things with the compose files.

          • IrateAnteater@sh.itjust.works · 2 days ago

            I’m learning to hate it right now too. For some reason it’s refusing to upload a local image from my laptop, and the error that comes up tells me exactly nothing useful.

  • PerogiBoi@lemmy.ca · 3 days ago

    I don’t like Docker. It’s hard to update containers, hard to modify specific settings, hard to configure network settings; overall I’ve just had a bad experience. It’s fantastic for quickly spinning things up, but for long-term use and customizing it to work well with all my services, I find it lacking.

    I just create Debian containers or VMs for my different services using Proxmox. I have full control over all the settings that I didn’t have in Docker.