

You can probably assign it to multiple VMs or containers, and if it’s not available then the VM or container will fail to start.
What’s in the log on the server?
Huh, I missed that.
Yes, and while borrowing contributes to their “we’re needed, please fund us” justification, it doesn’t directly support the artists.
Especially given that they’re owned by Epic Games.
Does it not require an account for that? If it doesn’t, I would open a feature request, because otherwise it’s an easy denial-of-service vector.
Where do you see that? The repo says you should be able to clone and run it.
No, it does not. An HBA (or, I think, a RAID controller flashed to IT mode) presents the disks directly to the system. I’ve done this several times.
Just swap the RAID controller for an HBA.
Right, you just need to make sure that the user inside the container has permission to access the device. They cover this on the front page of the repo: https://github.com/TheoLeCalvar/peertube-plugin-hardware-transcode-vaapi?tab=readme-ov-file#running-the-docker-image
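For what it’s worth, here’s a hedged `docker run` sketch of what that looks like; the device path, group name, and image tag are assumptions for a typical Intel/AMD iGPU setup, not the plugin’s exact documented command:

```sh
# Pass the VAAPI render device into the container and add the container user
# to the host group that owns it. /dev/dri/renderD128, "render", and the image
# tag are assumptions; check what your host and setup actually use.
docker run -d \
  --device /dev/dri/renderD128 \
  --group-add render \
  chocobozzz/peertube:production-bookworm
```

If the “render” group name doesn’t exist inside the image, use the numeric GID instead.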
What exactly are you doing? You may not need to do this. In some cases you can use the group number instead of the name if the group doesn’t exist inside the container.
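If it helps, here’s a hedged example of what I mean; the group name “render” and the GID 989 are just illustrative:

```sh
# Look up the host's GID for the group, then pass the number to docker run's
# --group-add (or compose's group_add) instead of the name. The numeric form
# works even when the group doesn't exist inside the image.
getent group render      # prints e.g. "render:x:989:" -> the GID is 989
```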
Generally, preambles are not considered binding terms.
But this isn’t open source. It restricts you from using it for profit.
Yes, there is the internal subnet, but it’s not something you’re supposed to use directly.
You don’t need multiple devices and quorum unless you’re using HA. I have two nodes just so I can migrate back and forth when doing updates instead of shutting all the VMs down. No quorum, no HA.
If your request is showing up in nginx’s log, it means you can reach nginx. The upstream is where nginx is going to get the content you want. In your case, that should be the other containers.
I edited, since it was ambiguous. I think you only need ZFS if you want replication; cold migrations should be fine without it.
Removing nodes from clusters is fine. It’s not really encouraged, but if a node fails you have to be able to remove it, so it’s possible.
The upstream refused the connection. You have it there as 127.0.0.1, but for inter-container communication you’d usually use the container name; 127.0.0.1 refers back to the same container.
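Something like this, as a hedged sketch; I’m assuming a compose setup where the other service is called “app” and listens on port 3000 (substitute your real service name and port):

```nginx
# nginx site config inside the nginx container
location / {
    # proxy_pass http://127.0.0.1:3000;   # wrong here: loops back to the nginx container itself
    proxy_pass http://app:3000;           # Docker's embedded DNS resolves the compose service name
}
```

That only works if both containers are on the same Docker network, which compose sets up by default.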
Docker networking (and container networking in general) is a bit tricky, and I’ll admit I don’t fully understand it, because it doesn’t behave like regular networking. But a simple scenario like yours shouldn’t be difficult.
You could even just create a cluster and do a migration. I don’t think you need ZFS for that; you only need ZFS for replication.
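As a rough, hedged sketch of the CLI flow (the cluster name, IP, VMID, and node name are made up):

```sh
# On the first node: create the cluster.
pvecm create mycluster

# On the second node: join it, pointing at the first node's IP.
pvecm add 192.168.1.10

# Then do a cold migration of a VM to the other node (no ZFS needed).
qm migrate 100 node2
```

You can do the same thing from the web UI; replication is the only part that insists on ZFS.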
Right, and when you said you “finally setup nginx”, that was a verb, and should have been “set up”.