You could possibly run AI Horde if they have enough RAM or VRAM. You could run bare-metal Kubernetes, or run it inside Proxmox.
OP image is blurry for me. Direct link
I am so grateful for snapshotting file systems like ZFS. Restore the last working snapshot and continue on.
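For anyone new to ZFS, the recovery really is that short. A sketch, with a made-up dataset and snapshot name:

```shell
# list snapshots for the dataset (dataset/snapshot names here are hypothetical)
zfs list -t snapshot tank/apps

# roll back to the last working snapshot
# (-r also destroys any snapshots newer than the target, so check first)
zfs rollback -r tank/apps@autosnap_2024-01-01
```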
Great write-up, glad to see a mention of the nibble (my favorite lol)… You forgot to mention byte order (little/big-endian).
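For readers who haven't run into it: byte order is the order in which the bytes of a multi-byte value are stored. A quick way to see it on your own machine (the second output shown assumes a little-endian CPU, i.e. x86 or most ARM):

```shell
# write four bytes, then view them as raw bytes vs. as one 32-bit word
printf '\x01\x02\x03\x04' > word.bin
od -An -tx1 word.bin   # bytes in file order: 01 02 03 04
od -An -tx4 word.bin   # as a host-order 32-bit integer: 04030201 on little-endian
```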
Honestly I just moved back to local accounts. I’m interested in the other comments on this post for a good solution to move to.
Does that work with gitea? I was able to get it working with Authentik but wasn’t able to get it working on Keycloak.
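In case it helps, Gitea can register an OpenID Connect source from its admin CLI; the realm URL and client credentials below are placeholders for whatever your Keycloak instance uses:

```shell
# hypothetical realm/client values; substitute your Keycloak details
gitea admin auth add-oauth \
  --name keycloak \
  --provider openidConnect \
  --key gitea-client-id \
  --secret gitea-client-secret \
  --auto-discover-url "https://keycloak.example.com/realms/myrealm/.well-known/openid-configuration"
```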
FYI, Docker Engine can use different runtimes, and there are lightweight VM runtimes like Kata or Firecracker. I hope one day Docker will default to that technology, as it would be better for the overall security of containers.
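If anyone wants to try it today, Kata can be registered as an extra runtime without making it the default. A sketch, assuming `kata-runtime` is installed at `/usr/bin/kata-runtime` (adjust for your install):

```shell
# register Kata as an additional runtime in /etc/docker/daemon.json
cat > /etc/docker/daemon.json <<'EOF'
{
  "runtimes": {
    "kata": { "path": "/usr/bin/kata-runtime" }
  }
}
EOF
systemctl restart docker

# run a container under the VM-isolated runtime
docker run --rm --runtime kata alpine uname -r
```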
I have a mixed-architecture cluster as well. It works great as long as you set your manifests up properly: either use public images that support both architectures, build your own, or set up node affinity to ensure architecture-specific pods run only on nodes with the matching architecture.
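For the affinity route, the built-in `kubernetes.io/arch` node label is enough. A sketch using `nodeSelector` (the simple form of node affinity); pod name and image are arbitrary:

```shell
# pin a pod to amd64 nodes via the well-known arch label
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: amd64-only
spec:
  nodeSelector:
    kubernetes.io/arch: amd64
  containers:
    - name: app
      image: nginx
EOF
```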
Other than k3s.io’s documentation and tailscale’s documentation, I don’t have any to share, but I don’t mind answering questions if you are stuck.
https://docs.k3s.io/installation
https://tailscale.com/kb/1017/install
Install tailscale and k3s on the master node and worker nodes. I have a setup like this and it works well. I have nodes in different physical locations from the master node, and it works fine.
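Roughly, the setup above looks like this (the Tailscale IP and node token are placeholders):

```shell
# on every node: join the tailnet
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

# on the master: install k3s and bind flannel to the tailscale interface
curl -sfL https://get.k3s.io | sh -s - server --flannel-iface tailscale0

# on each worker: join using the master's tailscale IP and the node token
# (the token lives in /var/lib/rancher/k3s/server/node-token on the master)
curl -sfL https://get.k3s.io | \
  K3S_URL=https://100.64.0.1:6443 K3S_TOKEN=<node-token> sh -s - agent --flannel-iface tailscale0
```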
A client that has the messages could sync them to a new client by re-encrypting them. You say there is no way to do that, but how would the client decrypt them to show the user if that were true?
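A toy illustration of the point, using symmetric keys via openssl for brevity (real messengers use per-device asymmetric keys; all file and key names here are made up):

```shell
# a message the "old client" can decrypt
echo 'hello' > msg.txt
openssl enc -aes-256-cbc -pbkdf2 -pass pass:oldkey -in msg.txt -out msg.old

# the old client decrypts it (it must be able to, to show the user)...
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:oldkey -in msg.old -out msg.plain

# ...and re-encrypts it for the new client's key
openssl enc -aes-256-cbc -pbkdf2 -pass pass:newkey -in msg.plain -out msg.new

# the new client can now read it
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:newkey -in msg.new   # prints the original message
```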
Very insightful. I definitely need to check out cloud-init, as that’s the one thing you mentioned I have practically no experience with. Side note: I hate other people’s Helm charts with a passion. There’s no consistency in what is exposed, and for anything that isn’t cookie-cutter you end up customizing the chart to the point where it’s probably easier to start with a custom template to begin with, which is what I started doing!
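For anyone else starting with cloud-init, a minimal user-data file looks something like this (hostname, user, and key are placeholders):

```shell
# write a minimal cloud-init user-data file
cat > user-data <<'EOF'
#cloud-config
hostname: k3s-node-1
users:
  - name: ops
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... ops@example
packages:
  - curl
runcmd:
  - curl -sfL https://get.k3s.io | sh -
EOF
```

The first line must be exactly `#cloud-config`, or cloud-init ignores the file.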
You urge teams to stop using it [ansible?] as soon as they can? What do you recommend to use instead?
Dynamic inventory. I haven’t used it against a cloud API before, but I have used it against the kube API and it was manageable. Are you saying that through kubectl the node names differ depending on the cloud and aren’t uniform? Edit: Oh, you’re talking about the VMs, doh.
I’ve tried ansible vault and didn’t make it very far… I agree that thing is a mess.
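For reference, the basic vault workflow is only a few commands (file names are made up); in my experience it's the layering of vaulted vars into group_vars that gets messy, not the CLI itself:

```shell
# encrypt a vars file in place (prompts for a vault password)
ansible-vault encrypt group_vars/all/secrets.yml

# view or edit without decrypting on disk
ansible-vault view group_vars/all/secrets.yml
ansible-vault edit group_vars/all/secrets.yml

# supply the password at run time
ansible-playbook site.yml --ask-vault-pass
```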
Thank god I haven’t run into interpreter issues, that sounds like hell.
Ansible output is terrible, no argument there.
I don’t remember the name for it, but I use parameterized template tasks. That might help with this? Edit: include_tasks.
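A sketch of what I mean: a task file that takes a `service_name` variable, included twice with different values (all names are hypothetical):

```shell
# a parameterized task file
mkdir -p tasks
cat > tasks/deploy_service.yml <<'EOF'
- name: "Render config for {{ service_name }}"
  ansible.builtin.template:
    src: "{{ service_name }}.conf.j2"
    dest: "/etc/{{ service_name }}.conf"
EOF

# a playbook that reuses it with different parameters
cat > site.yml <<'EOF'
- hosts: all
  tasks:
    - ansible.builtin.include_tasks: tasks/deploy_service.yml
      vars:
        service_name: nginx
    - ansible.builtin.include_tasks: tasks/deploy_service.yml
      vars:
        service_name: gitea
EOF
```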
I think this is due to the lack of an IDE that’s good at understanding the whole scope of a playbook, which could be a condemnation of Ansible, or just a sign that we need better abstraction layers for this unmanageable complexity we’re trying to manage.
I have noticed very slow speeds with sshfs as well. I’ll have to give rclone mount over ssh a try. Thanks!
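In case it saves someone a search, a sketch of an rclone mount over SFTP (remote name, host, and paths are placeholders):

```shell
# define an SFTP remote (values here are hypothetical)
rclone config create myserver sftp host example.com user me key_file ~/.ssh/id_ed25519

# mount it with write caching, in the background
mkdir -p /mnt/data
rclone mount myserver:/data /mnt/data --vfs-cache-mode writes --daemon
```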
How do you do the sshfs mount, tracker and search queries? Is that over tailscale?
Care to share some war stories? I have it set up where I can completely destroy and rebuild my bare metal k3s cluster. If I start with configured hosts, it takes about 10 minutes to install k3s and get all my services back up.
What is the tmpfs for?