I recently switched from TrueNAS to Synology for my NAS. TrueNAS had served me well, but I no longer had the time to manage it effectively.

Taking the opportunity, I decided to overhaul my entire home lab, which had gotten pretty messy over the years. As part of this overhaul, I will be discarding my old TrueNAS box due to its high power consumption and bulk. I will keep a NUC and a NUC-like mini PC with slightly lower specs but 2+ LAN ports.

With this configuration, my plan is to use Proxmox on the NUC as the primary system and use the second NUC as a backup. The backup NUC, however, has a dedicated connection via multiple LAN ports directly to the Synology NAS, so it would be ideal for storage-intensive tasks.

My primary use case will be running containers and a few VMs for services like Git, Pi-hole, backup services and more. Although my Synology NAS supports running containers and VMs, I prefer to keep things separate. I’ve already taken care of my infrastructure needs and won’t be hosting pfSense or similar services.

Since I haven’t looked into best practices lately, I’m very interested in learning new technologies like Ansible for automation.

I’m especially interested in understanding how to automate installs and updates while working with containers and VMs. I am considering whether to stay with Proxmox or go for a simpler distribution like Debian, Fedora or others.
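To give a sense of what I’m after, the kind of automation I’d like to end up with looks roughly like this Ansible sketch (untested, and the host group is just a placeholder for my two NUCs):

    # Rough sketch: apply pending apt updates on Debian-based nodes (Proxmox included)
    - name: Keep the nodes patched
      hosts: all        # placeholder: both NUCs from the inventory
      become: true
      tasks:
        - name: Refresh the package index and apply pending upgrades
          ansible.builtin.apt:
            update_cache: true
            upgrade: dist

        - name: Check whether a reboot is required
          ansible.builtin.stat:
            path: /var/run/reboot-required
          register: reboot_required

        - name: Report nodes that still need a reboot
          ansible.builtin.debug:
            msg: "{{ inventory_hostname }} needs a reboot"
          when: reboot_required.stat.exists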

Thanks for your insights!

  • Illiterate Domine@infosec.pub · 1 year ago

    Little clusters of NUCs have become a really common way to run small Kubernetes clusters at home. I recently rebuilt mine (still using a bulky, power-hungry box like the one you’re tossing) and have been very happy with it. Everything is really stable, containers that misbehave are automatically destroyed and replaced, and updates are a breeze because everything lives in code/git.
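    To make the auto-healing part concrete, each service is just a small manifest like the rough sketch below (Pi-hole is only an example here, not my actual config): the liveness probe makes the kubelet kill and recreate the container whenever the check fails.

        # Rough sketch of a self-healing workload; name and image are examples
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: pihole
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: pihole
          template:
            metadata:
              labels:
                app: pihole
            spec:
              containers:
                - name: pihole
                  image: pihole/pihole:latest
                  livenessProbe:              # failing checks get the container restarted
                    tcpSocket:
                      port: 53
                    initialDelaySeconds: 30
                    periodSeconds: 10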

    • GreenDot 💚@le.fduck.net · 1 year ago

      What would be the benefit of running k8s at home, apart from getting a bit of experience dealing with it, compared to docker-compose on one or two nodes? Or Docker Swarm? Unless there is a big load of self-hosted services, which I get, plus the auto-healing from k8s as the orchestrator.

      Just curious, not taking a swing. Thanks!

      • psmt@lemmy.pcft.eu · 1 year ago

        K8s really shines when you start hosting more stuff, even on a single node. I definitely recommend giving k3s a try. I wouldn’t recommend it for only a couple of services though.

        Is it overkill? Yes, applying docker-compose manually also works. But then you still have to make your reverse proxy, your certificates and all your services work together yourself. You can write Ansible for it, but then you end up with a lot of custom code to maintain and you still don’t get all the nice features.
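        To be clear about what “making it all work together” means with compose, it’s roughly this kind of hand-maintained glue (Traefik and Gitea are only examples, the hostname is a placeholder, and certificate config is left out):

            # Rough sketch: reverse proxy plus per-service routing labels, maintained by hand
            services:
              traefik:
                image: traefik:v2.11
                command:
                  - --providers.docker=true
                  - --entrypoints.websecure.address=:443
                ports:
                  - "443:443"
                volumes:
                  - /var/run/docker.sock:/var/run/docker.sock:ro

              gitea:
                image: gitea/gitea:latest
                labels:
                  - traefik.enable=true
                  - traefik.http.routers.gitea.rule=Host(`git.example.lan`)
                  - traefik.http.routers.gitea.entrypoints=websecure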

        For me the killer feature was Flux. Your code, configs and even secrets live in git and get auto-deployed and auto-healed. It also has other features, such as operators that fetch Helm charts from other repos and apply your config to them.
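        A minimal Flux setup for a homelab is basically a Git source plus a Kustomization pointing at a folder of manifests, something like this sketch (repo URL and paths are placeholders; the API versions are the current Flux 2 ones, older installs still use the beta versions):

            # Rough sketch: Flux watches the repo and applies everything under ./apps
            apiVersion: source.toolkit.fluxcd.io/v1
            kind: GitRepository
            metadata:
              name: homelab
              namespace: flux-system
            spec:
              interval: 5m
              url: https://git.example.lan/homelab.git
              ref:
                branch: main
            ---
            apiVersion: kustomize.toolkit.fluxcd.io/v1
            kind: Kustomization
            metadata:
              name: apps
              namespace: flux-system
            spec:
              interval: 10m
              sourceRef:
                kind: GitRepository
                name: homelab
              path: ./apps
              prune: true       # anything removed from git gets removed from the cluster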

        • GreenDot 💚@le.fduck.net · 1 year ago

          Thanks for the reply. Flux is pretty good; I’m using ArgoCD, but both basically follow GitOps principles.
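          For comparison, the ArgoCD side of the same idea is just an Application with automated sync pointed at the repo, roughly like this (repo URL and paths are placeholders, not copied from my actual setup):

              # Rough sketch: ArgoCD keeps the cluster in sync with the repo
              apiVersion: argoproj.io/v1alpha1
              kind: Application
              metadata:
                name: homelab-apps
                namespace: argocd
              spec:
                project: default
                source:
                  repoURL: https://git.example.lan/homelab.git
                  targetRevision: main
                  path: apps
                destination:
                  server: https://kubernetes.default.svc
                  namespace: default
                syncPolicy:
                  automated:
                    prune: true
                    selfHeal: true    # drift gets reverted to what is in git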

          I might give k3s a look and see how it all works together.

    • Meow.tar.gz@lemmy.goblackcat.com · 1 year ago

      I am in the power-hungry big box category. The cost of NUCs has gone up in my area. I’ve found that a big box stuffed with second-hand hardware offers more value for my use case.