Hey! I have been using Ansible to deploy Docker containers for a few services on my Raspberry Pi for a while now and it’s working great, but I want to learn MOAR and I need help…

Recently, I’ve been considering migrating to bare metal K3S for a few reasons:

  • To learn and actually practice K8S.
  • To have redundancy and to try HA.
  • My RPis are all already running MicroOS, so it kind of makes sense to me to try other SUSE stuff (?)
  • Maybe eventually being able to manage my two separate server locations with a neat k3s + Tailscale setup!

Here is my problem: I don’t understand how things are supposed to be done. All the examples I find feel wrong. More specifically:

  • Am I really supposed to have a collection of small yaml files for everything, that I use with kubectl apply -f ?? It feels wrong and way too “by hand”! Is there a more scripted way to do it? Should I stay with everything in Ansible ??
  • I see few to no examples of how to deploy the service containers I want (pihole, navidrome, etc.) to a cluster, unlike docker-compose examples that can be found everywhere. Am I looking for the wrong thing?
  • Even the official docs seem broken. Am I really supposed to run many helm commands (some of which just fail) and try to get SSL certs just to have Rancher and its dashboard ?!

I feel that a K3S + Traefik + Longhorn + Rancher stack on MicroOS should be straightforward, but it’s really not.

It’s very much a noob question, but I really want to understand what I am doing wrong. I’m really looking for advice and especially configuration examples that I could try to copy, use and modify!

Thanks in advance,

Cheers!

  • killabeezio@lemmy.zip · edited 14 hours ago

    You have a lot of responses here, but I’ll tell you what k8s actually is, since a lot of people seem to get this wrong.

    Just like k8s, Docker is made up of many tools, although it’s packaged so that it looks like just one tool: that’s Docker Desktop. Under the hood there is Docker Engine, which is really a runtime plus an image-management service and API. You can dig into this more if you want. There are also containerd, runc, and CRI-O; these were all created so that different implementations can talk to that API in a standard way and interoperate.

    Moving on to k8s: it’s a way to run these containers at scale, horizontally. There are even ways to scale the nodes themselves vertically and horizontally, to provide more or fewer resources to place containers on. This makes k8s very event-driven, using a lot of APIs to communicate and take action.

    You said that you are running kubectl apply constantly and that it feels wrong. In reality, this is correct: under the hood you are talking to the k8s control plane, which takes that manifest and stores it, and other services watch the control plane to learn what they have to do. In fact, you can apply a whole directory of manifests, so you don’t have to specify each file individually.
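
    For example, a minimal sketch, assuming your manifests live in a local ./manifests directory:

        # apply every manifest in the directory at once
        kubectl apply -f ./manifests/
        # or recurse into subdirectories too
        kubectl apply -R -f ./manifests/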

    Again, there are many tools you can use to manage k8s. It is an orchestration system for managing and running pods, and you get to pick what tooling you want on top. If you want something driven from a git repo, you can use something like ArgoCD or Flux; this is considered “GitOps” and is more declarative. If you need templating, there are many options, like helm, jsonnet, and kustomize (although kustomize is not a full templating language). These help you define your manifests in a more repeatable and meaningful way, and you can always apply the output using the same tools (kubectl, argocd, flux, etc.).
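
    As a tiny sketch of the kustomize flavor (the resource file names here are assumptions, not from any real project):

        # kustomization.yaml
        apiVersion: kustomize.config.k8s.io/v1beta1
        kind: Kustomization
        namespace: media          # every resource below lands in this namespace
        resources:
          - deployment.yaml
          - service.yaml
        # render:  kubectl kustomize .
        # apply:   kubectl apply -k .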

    There are many services that can run in k8s and solve one problem or another, and those tools scale too, since they mostly share designs that keep scalability in mind. I’ve kept things very simple here, but try out vanilla k8s first to understand what is going on. It’s great that you are questioning these things, as it shows you understand there is probably something better you could be doing. Now you just need to find the tools that are right for you: ask what you hate or dislike about what you are doing, and look for a way to solve that and for tools that can help. https://landscape.cncf.io/ is a good place to start to see what tools exist.

    Anyway, good luck on your adventure. K8s is an enterprise tool after all and it’s not really meant for something like a home lab. It’s an orchestration system and NOT a platform that you can just start running stuff on without some effort. Getting it up and running is day 1 operations. Managing it and keeping it running is day 2 operations.

  • UnsavoryMollusk@lemmy.world · 20 hours ago

    I use Kube every day for work, but I would recommend that you not use it. It’s complicated, and it answers problems you don’t care about. How about Docker Swarm, or Podman services?

    • Keelhaul@sh.itjust.works · 14 hours ago

      I disagree; it is great to use. Yes, some things are more difficult, but as OP mentioned, he wants to learn more, and running your own cluster for your services is an amazing way to learn k8s.

  • irmadlad@lemmy.world · 1 day ago

    I’ve thought about k8s, but there is so much about Docker that I still don’t fully know.

  • non_burglar@lemmy.world · 2 days ago

    K3s (and k8s, for that matter) expects you to build a hierarchy of yaml configs, mostly because container instances get spun up in groups: certain traits apply to the whole organization, certain ones apply to most groups but not all, and certain configs are special to particular services (say, extra http nodes added when demand passes some threshold).

    But I wonder why you want to cluster navidrome or pihole. Navidrome would need significant load before load balancing became necessary (and it is non-trivial to implement), while pihole can simply be put behind a round-robin DNS forwarder and would also be awkward to run behind a load balancer.

  • moonpiedumplings@programming.dev · 2 days ago

    Firstly, I want to say that I started with podman (an alternative to docker) and Ansible, but I quickly ran into issues. The last issue I encountered, and the final straw, was that Ansible would not actually change a container’s configuration unless I used Ansible to destroy and recreate it.

    Without quadlets, podman manages its own state, which has issues and was the entire reason I was looking for alternatives to podman for managing state.

    More research turned up https://github.com/linux-system-roles/podman: an ansible role that generates podman quadlets. But I don’t really want to pull other people’s ansible roles into my existing ones. Also, it takes kubernetes yaml as input, which is very complex for what I was trying to do. At that point, why not just run a single-node kubernetes cluster and let kubernetes manage state?

    So I switched to Kubernetes.

    To answer some of your questions:

    Am I really supposed to have a collection of small yaml files for everything, that I use with kubectl apply -f ?? It feels wrong and way too “by hand”! Is there a more scripted way to do it? Should I stay with everything in Ansible ??

    So what I (and the industry) use is called “GitOps”: you keep your configs in a git repo, and software running in the cluster automatically pulls the repo and applies them.

    Here is my gitops repo: https://github.com/moonpiedumplings/flux-config. I use FluxCD for GitOps, but there are other options, like Rancher’s Fleet or the most popular one, ArgoCD.
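
    As a rough sketch of the two objects Flux watches (the repo URL and path here are hypothetical, not my actual setup):

        apiVersion: source.toolkit.fluxcd.io/v1
        kind: GitRepository
        metadata:
          name: homelab
          namespace: flux-system
        spec:
          interval: 5m                  # how often to re-fetch the repo
          url: https://github.com/example/homelab-config
          ref:
            branch: main
        ---
        apiVersion: kustomize.toolkit.fluxcd.io/v1
        kind: Kustomization
        metadata:
          name: apps
          namespace: flux-system
        spec:
          interval: 10m
          sourceRef:
            kind: GitRepository
            name: homelab
          path: ./apps                  # directory of manifests to apply
          prune: true                   # delete cluster objects removed from git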

    As a tip, you can search GitHub for pieces of code to reuse. I usually search path:*.y*ml keywords to find appropriate pieces of yaml.

    I see few to no examples of how to deploy the service containers I want (pihole, navidrome, etc.) to a cluster, unlike docker-compose examples that can be found everywhere. Am I looking for the wrong thing?

    So the first thing to know is that Kubernetes doesn’t really deal in “containers”. Instead, the smallest schedulable unit in Kubernetes is a “pod”, which is a collection of containers that share a network namespace. That said, pods for selfhosted services like the ones this community is interested in will rarely have more than one container in them.
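
    A minimal single-container pod looks like this (the image tag and port are assumptions based on navidrome’s docs):

        apiVersion: v1
        kind: Pod
        metadata:
          name: navidrome
        spec:
          containers:
            - name: navidrome
              image: deluan/navidrome:latest
              ports:
                - containerPort: 4533   # navidrome's default web port

    In practice you’d wrap this in a Deployment so Kubernetes restarts and replaces it for you.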

    There are tools to convert a docker-compose file into kubernetes manifests.
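
    kompose, for example, does a rough first pass (the output will usually still need hand-editing):

        # convert a compose file into kubernetes manifests
        kompose convert -f docker-compose.yml
        # review the generated yaml, then apply it
        kubectl apply -f .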

    But in general, Kubernetes doesn’t use compose files for premade services; it uses helm charts instead. If you are having issues installing specific helm charts, ask for help here so we can iron them out. Helm charts are pretty reliable in my experience, though they do tend to be more involved to set up than docker-compose.
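
    The general shape of a chart install looks like this (the repo URL, chart name, and value are placeholders; check the chart’s README for the real ones):

        # register the chart repository and refresh the index
        helm repo add example https://charts.example.org
        helm repo update
        # install a release into its own namespace, overriding one value
        helm install pihole example/pihole \
          --namespace pihole --create-namespace \
          --set persistence.enabled=true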

    Even the official docs seem broken. Am I really supposed to run many helm commands (some of which just fail) and try to get SSL certs just to have Rancher and its dashboard

    So what you’re supposed to do is deploy an “ingress” controller (k3s ships with traefik by default), and then use cert-manager to automatically obtain letsencrypt certs for your ingress objects.

    Actually, traefik also comes with its own way to get SSL certs (in addition to the ingress + cert-manager route), so you can look into that as well, but I decided on the standardized ingress + cert-manager method because it stays compatible with other ingress software.
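
    A sketch of what that looks like (hostname, issuer name, and backend service are assumptions):

        apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: navidrome
          annotations:
            # tells cert-manager which ClusterIssuer should obtain the cert
            cert-manager.io/cluster-issuer: letsencrypt-prod
        spec:
          rules:
            - host: music.example.com
              http:
                paths:
                  - path: /
                    pathType: Prefix
                    backend:
                      service:
                        name: navidrome
                        port:
                          number: 4533
          tls:
            - hosts:
                - music.example.com
              secretName: navidrome-tls   # cert-manager creates this secret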

    Although it seems complex, I’ve come to really, really love Kubernetes because of the features mentioned here, especially the declarative part, where all my services live as code in a git repo.

  • testgoofy@infosec.pub · edited 1 day ago

    Hey there,

    I made a similar journey a few years ago, but I only have one home server and do not run my services in high availability (HA). As @non_burglar@lemmy.world mentioned, running a service in HA takes more than “just scaling up”: you need to know exactly what talks to whom, and when. For example, database entries or file writes become difficult when you scale up a service that isn’t ready for HA.

    Here are my solutions for your challenges:

    • No, you are not supposed to run kubectl apply -f for each file. I would strongly recommend helm; then you just have to run helm install per service. If you write each service yourself, you will end up with multiple .yaml files (I do it this way; normally you create one repository per service, holding all of its YAML files). Alternatively, you can use a predefined Helm Chart and just customize its settings, which is comparable to DockerHub.
    • If you want to deploy to a cluster, you only have to deploy to one server. If your .yaml configuration defines multiple replicas, k8s will automatically balance those replicas across the servers in the cluster and split the load between them (see the sketch after this list). If you are just looking for configuration examples, look into Helm Charts; services often provide examples only for Docker (and Docker Compose), not for K8s.
    • As I see it, you only have to run a single install script on your first server and afterwards join the cluster from the second server, and k3s is deployed. Traefik is installed alongside k3s. If you want to access Traefik’s dashboard and install rancher and longhorn, then yes, you will have to run several installations. Since you already have experience with Ansible, I suggest putting everything for the “base installation” into one playbook and then executing that playbook once.
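
    Here is a sketch of the replicas part (names are placeholders; traefik/whoami is just a tiny test server):

        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: whoami
        spec:
          replicas: 3            # k8s spreads these pods across cluster nodes
          selector:
            matchLabels:
              app: whoami
          template:
            metadata:
              labels:
                app: whoami
            spec:
              containers:
                - name: whoami
                  image: traefik/whoami
                  ports:
                    - containerPort: 80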

    Changelog:

    • Removing the k3s install command. If you want to use it, look it up on the official website; do not copy-paste the command from a random user on lemmy ;) Thanks to @atzanteol@sh.itjust.works for bringing up this topic.

    • atzanteol@sh.itjust.works · 1 day ago

      curl -sfL https://get.k3s.io/ | sh -

      Never, ever install anything this way. The trend of “just run this shell script off the internet” is a menace. You don’t know what that script does, what repositories it may add, what it may install, whether somebody is typo-squatting the URL and you’re running something else, etc.

      It’s just a bad idea. If you disagree then I have one question - how would you uninstall k3s after you ran that blackbox?

      • testgoofy@infosec.pub · 1 day ago

        Yes, just running a random script from the internet is a very bad idea. You should also not copy and paste the command above, since I’m only a random lemmy user. Nevertheless, if you trust k3s, and they promote this command on their official website (make sure it’s the official one), you can use it. As you want to install k3s, I’m going to assume you trust k3s.

        If you want to review the script, go for it; and you should, I agree. I myself reviewed it (or at least looked it over) when I used it.

        For uninstalling: just follow the instructions on the official website and run /usr/local/bin/k3s-uninstall.sh (source)

        • atzanteol@sh.itjust.works · edited 1 day ago

          I really want to push back on the entire idea that it’s okay to distribute software via a curl | sh command. It’s a bad practice. I shouldn’t have to read hundreds of lines of shell script to see what sort of malarkey your installer is going to do to my system. This application creates an uninstall script. Neat. Many don’t.

          Of the myriad ways to distribute Linux software (deb, rpm, snap, flatpak, AppImage) an unstructured shell script is by far the worst.

          • moonpiedumplings@programming.dev · 1 day ago

            I think that distributing general software via curl | sh is pretty bad, for all the reasons that curl | sh is bad and frustrating.

            But I do make an exception for “platforms” and package managers. The question I ask myself is: “Does this software enable me to install more software from a variety of programming languages?”

            If the answer to that question is yes, which it is for k3s, then I think it’s an acceptable exception. curl | sh is okay for bootstrapping things like Nix on non-Nix systems, because then you get a package manager that can install the various versions of tools which would otherwise each try to get you to install them with curl | bash.

            K3s is pretty similar, because Kubernetes is a whole platform, with its own package manager (helm) and applications you can install. It’s especially difficult to get the latest versions of Kubernetes on stable-release distros, as they don’t package it at all, so getting it from the developers is kind of the only way to install it.

            Relevant discussion on another thread: https://programming.dev/post/33626778/18025432

            One of the frustrations I express in the linked discussion is that it’s “developers” who make bash install scripts. But k3s is not made by just developers; it’s made by SUSE, who have their own distro, openSUSE, and its tooling. It’s “packagers” making k3s and its install script, and that’s another reason I find it more acceptable.

            • atzanteol@sh.itjust.works · 1 day ago

              Microk8s manages to install with a snap. I know that snap is “of the devil” around these parts but it’s still better than a custom bash script.

              Custom bash scripts will always be worse than any alternative.

              • moonpiedumplings@programming.dev · 1 day ago

                I’ve tried snap, juju, and Canonical’s suite. They were uniquely frustrating and I’m not interested in interacting with them again.

                The future of installing system components like k3s on generic distros is probably systemd sysexts, which are extension images that can be overlaid onto a base system. They are designed for immutable distros, but they can be used on any sufficiently standard distro.

                There is a k3s sysext, but it’s still in the “bakery”. Plus, sysext isn’t in stable release distros anyway.

                Until it’s out and stable, I’ll stick to the one-time bash script to install SUSE’s k3s.

                • atzanteol@sh.itjust.works · 1 day ago

                  You’re welcome to make whatever bad decisions you like. I can manage snaps with standard tooling. I can install, update, remove them with simple ansible scripts in a standard way.

                  Bash installers are bad. End of.

  • atzanteol@sh.itjust.works · 2 days ago

    Yeah - k8s has a bit of a steep learning curve. I recently-ish made the conversion from “a bunch of docker-compose files” to microk8s myself. So here are some thoughts for you (in no particular order).

    I would avoid helm like the plague. Everybody is going to recommend it to you, but it just puts a wrapper on a wrapper, and it is MUCH more complicated than what you’re going to need, because you’re not spinning up hundreds of similar-but-different services. Making things into templates adds a ton of complexity and overhead. It’s something for a vendor to do, not a home-gamer. And you’re going to need to understand the basics before you can create helm charts anyway.

    The actual yml files you need are relatively simple compared to a helm chart, which has to be parameterized and support a bazillion features.

    So yes - you’re going to create a handful of yml files and kubectl apply -f them. But you can do that with Ansible if you want, or you can combine them into a single yml file (separate the sections with ---).
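
    For example, one file holding two resources (names are placeholders):

        apiVersion: v1
        kind: Namespace
        metadata:
          name: demo
        ---
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: demo-config
          namespace: demo
        data:
          greeting: hello

    A single kubectl apply -f demo.yml creates both.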

    What I do is create a directory for each service. In it I have name_deployment.yml, name_service.yml, name_ingress.yml and name_pvc.yml. I just apply them when I change them, which isn’t frequent. Each application I deploy generally gets its own namespace for all its resources; I’ll only combine deployments into one namespace if they’re closely related (e.g. prometheus and grafana share a namespace).
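
    For a service called navidrome that looks something like this (just an illustration of the naming scheme):

        navidrome/
          navidrome_deployment.yml
          navidrome_service.yml
          navidrome_ingress.yml
          navidrome_pvc.yml

    And kubectl apply -f navidrome/ picks up the whole directory at once.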

    Do yourself a favor and install kubens, which lets you easily see and change your current namespace globally. Gawd, I hate having to type out my namespace for everything. 99% of the time, when you can’t find a thing with kubectl get, you’re not looking in the right namespace.
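
    Usage is about as simple as it gets:

        kubens                 # list namespaces, current one highlighted
        kubens media           # switch the default namespace to 'media'
        kubectl get pods       # now queries 'media' without needing -n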

    You’re going to need to sort out your storage situation. I use NFS for long-term storage for my pods and have microk8s configured to automatically allocate space on my NFS server when a pod requests a PV (persistent volume). You can also use local directories, but those won’t work across a cluster.
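
    A pod asks for that storage with a PVC; a sketch (the storage class name is whatever your provisioner registers, mine here is an assumption):

        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: navidrome-data
        spec:
          storageClassName: nfs-csi   # assumption; use your provisioner's class
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 5Gi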

    There are two basic ways to get traffic into the cluster. The first is an Ingress: the ingress controller acts as a hostname-based router for HTTP, so you point your DNS entries at it and it routes requests to your pods (via their internal ClusterIP services) based on the hostname of the request. It’s easy to use and works very well, but it only handles HTTP traffic. The other is a Service of type LoadBalancer, which gives a pod its own IP address on the network that you can connect to directly. The former only works for HTTP; the latter lets you expose any ports (e.g. ssh for a forgejo instance).
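
    The LoadBalancer variant for that ssh example looks roughly like this (names and ports are placeholders):

        apiVersion: v1
        kind: Service
        metadata:
          name: forgejo-ssh
        spec:
          type: LoadBalancer   # gets its own IP on the network
          selector:
            app: forgejo       # matches the pod's labels
          ports:
            - name: ssh
              port: 22         # port exposed on that IP
              targetPort: 2222 # port the container listens on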

    • testgoofy@infosec.pub · 2 days ago

      I agree: k8s and helm have a steep learning curve. I have an engineering background and understand k8s inside and out; therefore, for me, helm is the cleanest solution. I would recommend getting to know k8s and its resources before using (or creating) helm charts.

      • atzanteol@sh.itjust.works · 2 days ago

        Yeah - I did come down a bit harder on helm charts than I perhaps intended, but starting out with them was a confusing mess for me, especially since each one creates a new custom-to-this-thing config file for you to work with rather than standard yml you can google. The layer of indirection was very confusing while I was learning. Once I abandoned them and realized how simple a basic deployment in k8s really is, I was able to actually make progress.

        I’ve deployed half a dozen or so services now and I still don’t think I’d bother with helm for any of it.

  • towerful@programming.dev · edited 2 days ago

    Everyone talks about helm charts.
    I tried them and hate writing them.
    Then I found garden.io, which gives you a really nice way to consume repos (of helm charts, manifests, etc.) and apply them in a sensible way to a k8s cluster.
    Only thing is, it seems very tailored to a team of developers. I kinda muddled through with it, and it made everything so much easier.
    And although I massively appreciate that helm charts are used for most projects, they make sense for something you are going to share.
    For a solo project, or for consuming other people’s projects, I don’t think they really solve a problem.

    Which is why I used garden.io: designed for deploying kubernetes manifests, it had just enough tooling to make things easier.
    Though, if you are used to ansible, it might make more sense to stick with ansible.
    I’m pretty sure ansible can do it all in a way you are already familiar with.

    As for writing the manifests themselves, I find it rare that I need to (unless it’s something I’ve made myself). Most software has a k8s helm chart, so I just reference that in a garden file, set any variables I need, and all good.
    If there aren’t helm charts or kustomize files, then it’s a matter of adapting a docker compose file into manifests, which is manual.
    Occasionally I have to write some custom resources, config maps or secrets (CMs and secrets are easily made in garden).

    I also prefer to install operators instead of the raw service. For example, I use CloudNativePG to set up postgres databases.
    I create a custom resource that defines the database, and CNPG automatically provisions all the storage, pods, services, config maps and secrets.
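
    Something like this is all it takes (the name and sizes are placeholders):

        apiVersion: postgresql.cnpg.io/v1
        kind: Cluster
        metadata:
          name: app-db
        spec:
          instances: 2      # one primary plus one replica
          storage:
            size: 5Gi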

    The way I use kubernetes for my projects is:
    Apply all the infrastructure stuff (gateways, metallb, storage provisioners, etc.) from helm charts (or similar).
    Then apply all my pods, services, certificates, etc. from hand-written manifests.
    Using garden, I can make sure things are deployed in the correct order: operators are installed before the custom resources they manage are applied, secrets/CMs are created before being referenced, and so on.
    If I ever have to wipe and reinstall a cluster, it takes me 30 minutes or so from a clean TalosOS install to the project up and running, with just 3 or 4 commands.

    Any on-the-fly changes I make, I back-port to the project configs, so that when I wipe, reset, and reinstall I still get what I expect.

    However, I have recently found https://cdk8s.io/ and I mean to investigate it for creating the manifests themselves.
    Write code in a typed language, and have cdk8s generate the raw yaml manifests. Seems like a dream!
    I hate writing yaml. Autocomplete is useless (the editor has no idea what shape the yaml doc should take), and auto-formatting is useless (mostly because yaml is whitespace-sensitive, and the editor has no idea what is a child and what is a new parent). It just feels ugly and clunky.

      • towerful@programming.dev · 1 day ago

        Interesting, I might check them out.
        I liked garden because it was “for kubernetes”. It was a horse and it had its course.
        I had the wrong assumption that all those CD tools were specifically tailored to run as workers in a deployment pipeline.

        I’m willing to re-evaluate my deployment stack, tbh.
        I’ll definitely dig more into flux and ansible.
        Thanks!

        • moonpiedumplings@programming.dev · 1 day ago

          that all those CD tools were specifically tailored to run as workers in a deployment pipeline

          That’s CI 🙃

          Confusing terms, but yeah. ArgoCD and FluxCD just read from a git repo and apply it to the cluster. In my linked git repo, flux is used to install “helmreleases”, but argo has something similar.