I’m running three servers: one for home automation/NVR, one for NAS/media services, and one for network/firewall services.

Does this breakdown look doable based on the hardware? Should the services be distributed differently for better efficiency?

Servers 1 and 3 are already up and running. I just received my NAS and am trying to decide where to run each service to best take advantage of my hardware.

I’m also considering UnRaid instead of Proxmox for a NAS OS. I just chose Proxmox because I’m familiar with it, and I like the ability to snapshot. I also intend to run Proxmox Backup Server offsite at some point, and I like the PVE/PBS integration.

Any advice would be much appreciated!

  • lka1988@sh.itjust.works · 6 days ago

    Just remember the KISS principle: Keep It Simple, Stupid.

    Keep the NAS as a NAS, and I would honestly trim everything else down into a clustered hypervisor setup (like Proxmox) with dedicated VMs to run each stack. That way, if you need to take a machine down for whatever reason, you can migrate its VMs/containers to another machine with minimal downtime while you work on it.
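
    For example, on a Proxmox cluster, emptying a node before maintenance is only a couple of commands; the guest IDs and node name below are just placeholders:

        # live-migrate a VM to another cluster node
        qm migrate 100 pve2 --online

        # containers can't live-migrate, so this does a stop/move/start ("restart") migration
        pct migrate 105 pve2 --restart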

    Full disclosure: this is what I do. I was in your shoes before.

    • Possibly linux@lemmy.zip
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      1
      ·
      6 days ago

      I wouldn’t do that unless you have lots of money to blow on crazy hardware. Running a separate virtual machine for every service is very inefficient. Instead, run a few virtual machines with a few services in each. I would separate them into classes based on load and use case.

      • lka1988@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        1
        ·
        6 days ago

        “Instead, run a few virtual machines with a few services in each.”

        That’s what I meant; I guess it wasn’t very clear. When I say “stack”, I mean multiple services.

  • Possibly linux@lemmy.zip · 6 days ago

    I personally would avoid LXC. That seems to be a hot take, but in my experience it is better to run Docker/Podman in a few VMs.

      • Possibly linux@lemmy.zip · 6 days ago

        Maybe I’m doing it wrong then. I run LXC, but it has always been a much worse experience. Boot times are terrible, and the controls that work for VMs don’t work as well for LXC. You also can’t live-migrate, which is problematic for me.

        • ikidd@lemmy.world · 4 days ago

          I think you’re doing it wrong. LXCs boot almost instantaneously on a hypervisor since they hijack the host kernel; I’d be surprised if any of my CTs take 5 seconds.

          I would agree on the live migration issue, but I guess you pick your services accordingly. I have a VM that runs Docker and an LXC Docker host, and I pick my containers for each accordingly.

          • Possibly linux@lemmy.zip · 4 days ago

            How on earth are you getting 5-second boot times with LXC? My containers take around 10 minutes to boot while VMs take a few seconds. Also, LXC networking seems to break randomly.

            Edit: I went back and figured it out. IPv6 was set to DHCP on the container’s network interface in Proxmox, which made everything hang until the DHCP timeout. I set it to static and now the containers boot instantly.
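
            For anyone hitting the same thing, the change is on the container’s network device; from the PVE host it’s something along these lines (the CT ID, bridge, and addresses are just examples):

                # show the current network config of the container
                pct config 105 | grep ^net

                # switch ip6 from dhcp to a static address (addresses are placeholders)
                pct set 105 --net0 name=eth0,bridge=vmbr0,ip=dhcp,ip6=fd00:10::105/64,gw6=fd00:10::1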

            • ikidd@lemmy.world · 4 days ago

              I have no idea what you have going on; I’ve never seen LXCs take that long, even counting the time it takes to bring the containers down and back up around a host reboot.

              What are you using to run them? I just tested my Docker LXC and it took 16 seconds from typing “reboot” to having a login prompt, and that’s on an ancient R410 server running Proxmox.
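
              If you want to compare numbers, timing a full stop/start cycle from the PVE host is easy enough (the CT ID is a placeholder):

                  # time how long a container takes to stop and come back up
                  time (pct stop 105 && pct start 105)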

    • lka1988@sh.itjust.works · 6 days ago

      Not everything plays nice in Docker, and plenty of those services also don’t need a full VM to operate. LXC is great for those edge cases. Otherwise I agree: a few VMs for the various Docker stacks is the way to go.

  • AustralianSimon@lemmy.world · 5 days ago

    Personally I would keep it simple: run a separate NAS and run all your services in containers across the devices best suited to them. The i3 is not going to cope with Jellyfin while also hosting those other services. I tried running it on an N100 and had to move it to a beefier machine (an i5). Immich, for example, will use a lot of resources when performing operations; just a warning.

    If you mount NAS storage for the container data, you can move the containers between machines with minimal issues. Just make sure you run each service from a docker-compose file and keep the compose files and data on the NAS.
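
    As a rough sketch of what that looks like (the share name and paths are made up):

        # mount the NAS share that holds the compose files and app data
        mount -t nfs nas.lan:/export/appdata /mnt/appdata

        # bring a stack up from the share
        cd /mnt/appdata/jellyfin && docker compose up -d

        # moving it: compose down here, mount the same share on the other box, compose up -d there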

    You completely negate the need for VMs and their overhead, and you can still snapshot the machine: if you run Debian as the OS there is Timeshift, and other distros have similar tools.
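
    A pre-change snapshot is then a single command (assuming Timeshift is already installed and configured):

        # take a manual snapshot before touching anything
        sudo timeshift --create --comments "pre-upgrade"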

    • merthyr1831@lemmy.ml · 4 days ago

      Quick Sync should let the i3 handle Jellyfin just fine if you’re not going beyond 1080p for a couple of concurrent users, especially if you configure the nice values to prefer Jellyfin over Immich.
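
      If both end up as services on the same host, that’s just a niceness tweak; the unit names below are hypothetical, so adjust to however you actually run them:

          # give Jellyfin a higher scheduling priority and Immich a lower one
          sudo systemctl edit jellyfin.service   # add under [Service]:  Nice=-5
          sudo systemctl edit immich.service     # add under [Service]:  Nice=10

          # or adjust processes that are already running
          sudo renice -n 10 -p $(pgrep -d ' ' -f immich)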

      I’m not familiar with the N300’s platform, but if it’s upgradable it might be worth the initial setup, with some room to swap in a stronger CPU later if it causes trouble.

      If OP is going for multiple systems, I’d definitely agree on making one of them a pure NAS and letting a more upgradable system run the chunky stuff.

      • AustralianSimon@lemmy.world · 4 days ago

        “Quick Sync should let the i3 handle Jellyfin just fine if you’re not going beyond 1080p for a couple of concurrent users, especially if you configure the nice values to prefer Jellyfin over Immich.”

        Most of my content is 4K H.264. You may be right about 1080p, but I generally don’t have content at that resolution.

        Worst case, he can always keep the N300 for other stuff if it doesn’t work out.

    • ikidd@lemmy.world · 4 days ago

      The advantages you gain from running a hypervisor on something like ZFS are immeasurable: snapshotting, replication, snapshot backups, and high availability. You don’t have to quiesce machines to back them up, and you can take instant COW snapshots before upgrades.
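
      A pre-upgrade snapshot, for example, is instant either way (the VM ID and dataset name are placeholders):

          # snapshot a guest through Proxmox
          qm snapshot 100 pre-upgrade

          # or snapshot the backing ZFS dataset directly
          zfs snapshot rpool/data/vm-100-disk-0@pre-upgrade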

      KVM doesn’t really have overhead; that’s the in-kernel part. Maybe a bit of RAM, but with LXCs it’s negligible.

      • AustralianSimon@lemmy.world · 4 days ago

        I didn’t think OP was going the ZFS route, so it wouldn’t matter on that point.

        His Server 2 will be running at the red line IMHO, so any overhead will have an impact.