Currently, I run Unraid and have all of my services set up there as Docker containers. While this is nice and easy to set up initially, it has some major downsides:

  • It’s fragile. Unraid is prone to bugs/crashes with Docker that take down my containers. It’s also not resilient, so when things break I have to log in and fiddle.
  • It’s mutable. I can’t use any infrastructure-as-code tools like Terraform, and configuration sort of just exists in the UI. I can’t really roll back or recover easily.
  • It’s single-node. Everything is tied to my one big server that runs the NAS, but I’d rather have the NAS as a separate fairly low-power appliance and then have a separate machine to handle things like VMs and containers.

So I’m looking ahead and thinking about what the next iteration of my homelab will look like. While I like Unraid for the storage side, I’m a little tired of wrangling it into a container orchestrator and hypervisor, and I think this year I’ll split that job out to a dedicated machine. I’m comfortable with, and in fact prefer, IaC over fancy UIs, so I’d love to be able to use Terraform or Pulumi or something like that. I’d also prefer something multi-node, as I want to be able to tie multiple machines together. And I want something fault-tolerant, as I host services for friends and family that currently require a lot of manual intervention when they go down.

So the question is: how do you all do this? Kubernetes, Docker Compose, HashiCorp Nomad? Do you run k3s, Harvester, or what? I’d love to hear what people are doing and why, so I can get some ideas for what I might do.

  • johntash@eviltoast.org · 11 months ago

    What I do right now is I have an rclone sidecar container that uploads the files in a directory every few seconds, plus another init sidecar that runs before the main application and downloads those files (incl. sqlite dbs) back to normal disk. This works okay but feels pretty clunky, and it can still result in corruption, because I’m just copying the raw db files rather than using any sqlite commands to back the db up to a separate file that isn’t in use first.
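
    Roughly, the upload side of that boils down to a loop like this (a minimal sketch - the remote name and paths are placeholders, not a real config):

    ```python
    # Sketch of the rclone upload sidecar: every few seconds, shell out to rclone
    # to copy the app's data directory to a remote. Remote name and paths are
    # placeholders.
    import subprocess
    import time

    LOCAL_DIR = "/data/app"        # directory the main container writes to
    REMOTE = "backups:app-data"    # an rclone remote configured elsewhere

    while True:
        # `rclone copy` only transfers new/changed files, but it will happily copy
        # a sqlite db that is mid-write, which is where the corruption risk comes from.
        subprocess.run(["rclone", "copy", LOCAL_DIR, REMOTE], check=False)
        time.sleep(10)
    ```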

    How do you handle a job moving from one Nomad node to another? Or do you pin jobs like Grafana to specific hosts?

    • nico@r.dcotta.eu · 10 months ago

      Nomad has host volumes - you can tell it to mount a folder from the host machine into the container, and it will only schedule that container on machines that have that folder. So yes, effectively you pin the workload, which introduces a SPOF - I don’t love it, but Grafana only supports sqlite and postgres, so making those HA would require failover setups, which is a bit much for a homelab :')

      For backing up, you can run sqlite’s own backup command periodically (via a cron job or a Nomad periodic job) and then upload the backup to some external, safe storage (could be SeaweedFS or S3!). For postgres you can use something like this.
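
      A rough sketch of that in Python, using sqlite’s online backup API and an S3 client (the paths, bucket, and SeaweedFS endpoint here are just placeholder assumptions):

      ```python
      # Take a consistent snapshot of a live sqlite db, then upload it to
      # S3-compatible storage. SeaweedFS exposes an S3 gateway, so the same
      # client call works against it. All names here are placeholders.
      import sqlite3
      import boto3

      DB_PATH = "/data/grafana/grafana.db"      # live db used by the app
      BACKUP_PATH = "/tmp/grafana-backup.db"    # snapshot written by sqlite itself

      def backup_and_upload():
          # sqlite's backup API copies the db page by page while it's in use,
          # unlike copying the raw file, so the snapshot stays consistent.
          src = sqlite3.connect(DB_PATH)
          dst = sqlite3.connect(BACKUP_PATH)
          with dst:
              src.backup(dst)
          dst.close()
          src.close()

          # Ship the snapshot off-host (S3 proper, or SeaweedFS via its S3 gateway).
          s3 = boto3.client("s3", endpoint_url="http://seaweedfs:8333")  # assumed endpoint
          s3.upload_file(BACKUP_PATH, "homelab-backups", "grafana/grafana.db")

      if __name__ == "__main__":
          backup_and_upload()
      ```

      Run it from cron or as a Nomad periodic batch job so the snapshot gets refreshed on a schedule.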