So I have rebuilt my production rack with very little in the way of an actual software plan.

I mostly host Docker-containerized services (Forgejo, Ghost blog, OpenWebUI, Outline), and I was previously hosting each one in its own Ubuntu Server VM on Proxmox, which defeats the purpose of containerizing them.

My plan was to run a VM on each of these ThinkCentres, join those VMs into a Kubernetes cluster, and run everything on that. But that also feels silly since these PCs are already clustered through Proxmox 9.

I was thinking about using LXC, but part of the point of the Kubernetes cluster was to learn a new skill that might be useful in my career. I also don’t know how any of this will work with Cloudflare Tunnels (cloudflared), which is my preferred means of exposing services to the internet.

I’m willing to take a class or follow a whole bunch of “how-to” videos, but I’m a little frazzled by my options. Any suggestions are welcome.

  • Sunoc@sh.itjust.works · 10 days ago

    Damn that’s a good looking mini rack! Great job!

    I don’t have much experience or advice about Proxmox, just wanted to show appreciation ✌️

    • N0x0n@lemmy.ml · 10 days ago

      Was going to say the same! Such a cutesy, nice little mini-rack/server setup!

      Still fighting my spaghetti setup with cables sprouting everywhere.

    • nagaram@startrek.website (OP) · 11 days ago

      Yeah!

      So I am running these three computers in a setup that lets me manage virtual machines on them from a web interface, using Proxmox.

      I want to play with a tool that lets me run Docker containers. Containers are a way to host services like websites and web apps without having to make a virtual machine for each app.

      This has a lot of advantages, and in particular I’m trying to use the high-availability features you get when you run these containers on a cluster of computers.

      My problem is that I know I could use the container software built into the already-clustered Proxmox computers, called LXC (Linux Containers). However, I want to use a container orchestration tool called Kubernetes, and for that I would have to build virtual machines on my servers and then cluster those virtual machines.

      It’s a little confusing because I have three physical computers clustered together, and I’m trying to then build three virtual computers on top of them and cluster those too. It’s an odd thing to do, and that’s the problem.

      • Brkdncr@lemmy.world · 10 days ago

        It’s not odd. You’ll need to build the 3 VMs if you want to run Kubernetes and not destroy your existing hypervisor.

  • frongt@lemmy.zip · 11 days ago

    Nah, use one VM on each node as the kube host. That’s fine. You’re doing it for fun, you don’t need to min-max your environment.

    You’ll probably want to tear it down and redeploy it eventually anyway. That’s going to be a pain if you’ve installed them on bare metal.

    • nagaram@startrek.website (OP) · 11 days ago

      Fair point. I was also thinking it would be fun to use CoreOS so I can get one step closer to ArchBTW

  • towerful@programming.dev · 11 days ago

    I’d still run k8s inside a Proxmox VM. Even if basically all resources are dedicated to the VM, Proxmox gives you a huge amount of oversight and additional tooling.
    Proxmox doesn’t have to do much (or even anything) beyond providing a virtual machine.

    I’ve run Talos (a dedicated k8s distro) on bare metal. It was fine, but I wished I had a hypervisor. I was lucky that my project could be wiped and rebuilt with ease. Having a hypervisor would have meant I could just roll back to a snapshot, and separate worker/master nodes without running additional servers.
    That was sorely missed while I was learning both how to deploy k8s and k8s itself.
    For the next similar project, I’ll run Talos inside Proxmox VMs.

    As far as “how does Cloudflare work in k8s”… However you want?
    You could manually deploy the example manifests provided by Cloudflare.
    Or perhaps there are some Helm charts that can make it all a bit easier?
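
    For a sense of scale, the manual route is not much YAML. Here is a sketch of a cloudflared Deployment; the Secret name is a placeholder, and Cloudflare’s actual example manifests may differ slightly:

    ```yaml
    # Sketch: run cloudflared in the cluster with a token-based tunnel.
    # Assumes a Secret named "cloudflared-token" holding the tunnel token.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cloudflared
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: cloudflared
      template:
        metadata:
          labels:
            app: cloudflared
        spec:
          containers:
            - name: cloudflared
              image: cloudflare/cloudflared:latest
              args: ["tunnel", "--no-autoupdate", "run", "--token", "$(TUNNEL_TOKEN)"]
              env:
                - name: TUNNEL_TOKEN
                  valueFrom:
                    secretKeyRef:
                      name: cloudflared-token
                      key: token
    ```
    With a token-based tunnel, the public hostnames are then mapped to in-cluster service URLs (e.g. http://my-service.my-namespace.svc.cluster.local) in the Cloudflare dashboard.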

    Or you could install an operator, which will look for Custom Resource Definitions or specific metadata on standard resources, then deploy and configure the suitable additional resources in order to make it work.
    https://github.com/adyanth/cloudflare-operator seems popular?

    I’d look to reduce the amount of YAML you have to write/configure by hand, which is why I like operators.

    • nagaram@startrek.website (OP) · 11 days ago

      Quality answer. Glad my hunch was backed up by your experience. That’s very appreciated.

      I hadn’t tried anything with Cloudflared and Kubernetes yet so it would be sick to see it just work.

      • koala@programming.dev · 10 days ago

        I think Cloudflare Tunnels will require a different setup on k8s than on regular Linux hosts, but it’s such a popular service among self-hosters that I have little doubt that you’ll find a workable process.

        (And likely you could cheat, and set up a small Linux VM to “bridge” k8s and Cloudflare Tunnels.)
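
        As a rough sketch of that “bridge” idea: cloudflared on the VM just forwards hostnames to whatever the cluster exposes on the LAN. The tunnel name, IPs and ports below are made up:

        ```yaml
        # /etc/cloudflared/config.yml on the bridge VM (locally-managed tunnel)
        tunnel: homelab                              # tunnel name or UUID
        credentials-file: /etc/cloudflared/homelab.json
        ingress:
          - hostname: blog.example.com
            service: http://192.168.1.50:30080       # e.g. a NodePort exposed by the cluster
          - hostname: git.example.com
            service: http://192.168.1.50:30081
          - service: http_status:404                 # required catch-all rule
        ```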

        Kubernetes is different, but it’s learnable. In my opinion, K8S only comes into its own in a few scenarios:

        • Really elastic workloads. If you have stuff that scales horizontally (uncommon), you really can tell Amazon to give you more Kubernetes nodes when load grows, and destroy the nodes when load goes down. But this is not really applicable for self hosting, IMHO.

        • Really clustered software. Setting up, say, a PostgreSQL cluster is a ton of work. But people create K8S operators that you feed a declarative configuration (I want so many replicas, I want backups at this rate, etc.) and that work out everything for you… in a way that works in all K8S implementations (see the sketch after this list)! This is also very cool, but I suspect that there’s not a lot of this in self-hosting.

        • Building SaaS platforms, etc. This is something that might be more reasonable to do in a self-hosting situation.
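
        To make the clustered-software point concrete, a declarative PostgreSQL cluster can look like this. It uses the CloudNativePG operator purely as an example (the point stands for any such operator), and the names, sizes and schedule are made up:

        ```yaml
        # Sketch: a 3-instance PostgreSQL cluster plus a nightly backup schedule.
        apiVersion: postgresql.cnpg.io/v1
        kind: Cluster
        metadata:
          name: pg-main
        spec:
          instances: 3              # "I want so many replicas"
          storage:
            size: 10Gi
        ---
        apiVersion: postgresql.cnpg.io/v1
        kind: ScheduledBackup
        metadata:
          name: pg-main-nightly
        spec:
          schedule: "0 0 2 * * *"   # "I want backups at this rate" (cron with a seconds field)
          cluster:
            name: pg-main
        ```
        (A real setup would also need a backup destination configured on the Cluster; that part is omitted here.)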

        Like the person you’re replying to, I also run Talos (as a VM in Proxmox). It’s pretty cool. But in the end, I only run four apps I’ve written myself there, so I’m using K8S as a kind of SaaS… plus one more application, https://github.com/avaraline/incarnator, which is basically distributed as container images and which I was too lazy to deploy in a more conventional way.

        I also do this for learning. Although I’m not a fan of how Docker Compose is becoming dominant in the self-hosting space, I have to admit it makes more sense than K8S for self-hosting. But K8S is cool and might get you a cool job, so by all means play with it; maybe you’ll have fun!

  • rumba@lemmy.zip · 9 days ago

    That’s a sick little rack.

    Absolutely follow through with K8S. I recently did this and it’s definitely worth it.

    Running the workers in VMs is a little wasteful, but it simplifies your hardware and your backups. My homelab version is 3 VMs in Proxmox. The idea is that after it’s built and working, I can just move those VMs wholesale to other boxes. But realistically, adding workers to K8S is pretty brain-dead simple, and draining and migrating the old worker nodes is another skill you should be learning.

    You could throw Debian on everything and deploy all your software through Ansible.
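
    Roughly like this. This is a minimal sketch, not a tested playbook; the inventory group, image tag and port are placeholders:

    ```yaml
    # Sketch: base setup plus one containerized service, all driven by Ansible.
    - name: Base setup for every Debian node
      hosts: all
      become: true
      tasks:
        - name: Install container runtime and Python Docker bindings
          ansible.builtin.apt:
            name: [docker.io, python3-docker]
            state: present
            update_cache: true

    - name: Deploy self-hosted services
      hosts: app_nodes                 # hypothetical inventory group
      become: true
      tasks:
        - name: Run Forgejo as a container
          community.docker.docker_container:
            name: forgejo
            image: codeberg.org/forgejo/forgejo:1.21   # pin whatever version you actually run
            restart_policy: unless-stopped
            ports:
              - "3000:3000"
    ```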

    Don’t lose sight of the goal. Get k8s running, push through Longhorn, get some pods up in fault-tolerant mode, and learn the networking: the ingress, the DNS, load balancing, proxies.
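
    For the ingress piece, the shape you end up writing is roughly this. The hostname, ingress class and names are placeholders, and it assumes you have installed an ingress controller such as nginx or Traefik:

    ```yaml
    # Sketch: a Service fronting some pods, and an Ingress routing a hostname to it.
    apiVersion: v1
    kind: Service
    metadata:
      name: blog
    spec:
      selector:
        app: blog                  # matches the pod labels of the blog Deployment
      ports:
        - port: 80
          targetPort: 2368         # Ghost's default port
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: blog
    spec:
      ingressClassName: nginx      # whichever controller you end up running
      rules:
        - host: blog.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: blog
                    port:
                      number: 80
    ```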

    Exactly how you do it is less important than the act of doing it and learning kubectl.

  • non_burglar@lemmy.world · 11 days ago

    Running k8s in its own VMs will allow you to hedge against mistakes and keep some separation between infra and kube.

    I personally don’t use Proxmox anymore; these days I deploy with Ansible and roles, not k8s.

    • nagaram@startrek.website (OP) · 11 days ago

      Ansible is next on my list of things to learn.

      I don’t think I’ll need to dedicate all of my compute to K8s; probably just half for now.

      • corsicanguppy@lemmy.ca · 11 days ago

        Ansible is next on my list of things to learn.

        Ansible is Y2K tech brought to you in 2010. Its workarounds for its many problems bring problems of their own. I’d recommend mgmtconfig, but it’s a deep pool if you’re just getting into it. Try Chef (cinc.sh) or SaltStack, but keep mgmtconfig on the radar for when you want to switch from 2010 tech to 2020 tech.

        • non_burglar@lemmy.world · 10 days ago

          My issue with mgmtconfig is that it bills itself as an API-driven “modern” orchestrator, but as soon as you don’t have systemd on the clients, it becomes insanely complicated to blast out simple changes.

          Mgmtconfig also claims to be “easy”, but you have to learn MCL’s weird syntax, which is the same issue I have with Chef and its use of Ruby.

          Yes, Ansible is relatively simple, but it runs on anything (including actual arm64 support), and I daresay that layering roles and modules makes Ansible quite powerful.
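
          For illustration, “layering” can be as simple as a top-level play that composes reusable roles; the role names here are made up:

          ```yaml
          # Hypothetical site.yml that layers roles per host group.
          - hosts: docker_hosts
            become: true
            roles:
              - common          # users, SSH hardening, unattended-upgrades
              - docker_host     # container runtime and its config
              - monitoring      # whatever agent you use

          - hosts: backup_targets
            become: true
            roles:
              - common          # same base role, layered under a different stack
              - backup_client
          ```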

          It’s kind of like Nagios… Nagios sucks, but it has such a massive library of monitoring tricks and tools that it will be around forever.

          • corsicanguppy@lemmy.ca · 9 days ago

            have to learn MCL’s weird syntax

            You skewer two apps for syntax, but not Ansible’s fucking YAML? Dood. I’m building out a layered declarative config at the day job, and it’s just page after page of Python’s indentation fixation and PowerShell’s bipolar expressions. This is better for you?

        • kata1yst@sh.itjust.works · 10 days ago

          Wow, huge disagree on SaltStack and Chef being ahead of Ansible. I’ve used all three in production (and even Puppet) and watched Ansible absolutely surge onto the scene and displace everyone else in the enterprise space in a scant few years.

          Ansible is just so much lower overhead and so much easier to understand and make changes to. It’s dominating the configuration management space for a reason. And nearly all of the self-hosted/homelab space is active in Ansible and has tons of well-baked playbooks.

          • corsicanguppy@lemmy.ca · 9 days ago

            I’ve used all 3 in production (and even Puppet) and watched Ansible absolutely surge onto the scene and displace everyone else in the enterprise space in a scant few years.

            Popular isn’t always better. See: Betamax/VHS, Blu-ray vs HD DVD, Skype/MS Skype, everything vs Teams, everything vs Outlook, everything vs Azure. Ansible is accessible the way DUPLO is accessible, man, and with payola like Blu-ray got and pressure like what shot systemd into the frame, of course it would appeal to the C-suite.

            Throw a few thousand at Ansible/AAP and the jagged edges pop out – and we have a team of three dedicated to Nagios and AAP. And it’s never not glacially slow – orders of magnitude slower than absolutely everything else.

            • kata1yst@sh.itjust.works · 9 days ago

              Yeah, similar-sized environments here too, but I’ve had good experiences with Ansible. I’ve seen Chef struggle at even smaller scales. And Puppet. And SaltStack. But I’ve also seen all of them succeed. Like most things, it depends on how you run it. Nothing is a perfect solution, but I think Ansible has few game-breaking tradeoffs for its advantages.

  • vegetaaaaaaa@lemmy.world · edited · 8 days ago

    https://lemmy.world/post/34029848/18647964

    • Hypervisor: Debian stable + libvirt or PVE if you need clustering/HA
    • VMs: Debian stable
    • podman if you need containerization below that

    You can migrate VMs live between hosts (it’s a bit more work if you pick libvirt, but the overhead/features of Proxmox are sometimes overkill; libvirt is a bit more barebones, and each has its uses), have a cluster-wide L2 network, use one machine as backup storage for the others, use VM snapshots for rollback, etc. Regardless of the containerization/orchestration below that, a full hypervisor is still nice to have.

    I deploy my services directly to the VM or as podman containers in said VMs. I use Ansible for all automation/provisioning (though there are still a few basic provisioning/management steps to bootstrap new VMs; if it works, it works).
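
    As one sketch of the podman route: podman can run plain Kubernetes-style Pod YAML with `podman kube play`. The image tag and ports below are illustrative, and persistent volumes/env are omitted:

    ```yaml
    # openwebui.yaml; run with: podman kube play openwebui.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: openwebui
    spec:
      containers:
        - name: openwebui
          image: ghcr.io/open-webui/open-webui:main
          ports:
            - containerPort: 8080
              hostPort: 3000        # arbitrary host port on the VM
    ```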

  • notfromhere@lemmy.ml · 11 days ago

    This is pretty rad! Thanks for sharing. I went down the same road learning k3s on about 7 Raspberry Pis, then pivoted over to Proxmox/Ceph on a few old gaming PCs / Ethereum miners. Now I am trying to optimize the space and looking at how to rack-mount my ATX machines with GPUs lol… I was able to get an RTX 3070 to fit in a 2U rack-mount enclosure but I’m having some heat issues… going to look at 4U cases with better airflow for the RTX 3090 and various RX 480s.

    I am planning to set up Talos VMs (one per Proxmox host) and bootstrap k8s with Traefik and others. If you’re learning, you might want to start with a batteries-included k8s distro like k3s.

    • nagaram@startrek.website (OP) · 10 days ago

      My apartment is too small and my partner is too noise-sensitive for me to get away with a full rack. So my local LLM and Jellyfin encoder, plus my NAS, exist like this for the summer. Temps have been very good since the panels came off.

  • ABetterTomorrow@sh.itjust.works · 11 days ago

    Side question… it looks like you got the DeskPi tower with the Raspberry Pi rack. Did you figure out what the holes on the side are for, by that (not sure what it is) expansion slot (it looks like you put your labels over it)? Not sure what it’s for…

  • zzffyfajzkzhnsweqm@sh.itjust.works · 10 days ago

    I just recently tried to set up k3s in Proxmox LXC containers. I had to redo everything after I learned it was not possible to make this setup work without compromising security and isolation. Now I run Kubernetes inside virtual machines on Proxmox.

    • nagaram@startrek.website (OP) · 10 days ago

      That’s what I was thinking too. I just feel better having another layer between the open web and my server.

      • zzffyfajzkzhnsweqm@sh.itjust.works · 10 days ago

        To set up Kubernetes inside LXC you have to enable quite a few capabilities in the host kernel and the LXC containers, and those can be used to escalate privileges from being root in the container to root on the Proxmox host. I’m not completely sure, but since even containerd containers share the same kernel, an attacker might be able to escalate directly from a pod to the Proxmox host. But I’m not certain about this last part.