I dunno, I RMA’d my Nomad so many times.
If budget is no object, it’s only kind of a pain in the ass with Nvidia’s vGPU solutions for data centers. Even with ten grand spent, there are hypervisor compatibility issues, license servers, driver headaches for games and consumer OSes on hypervisors, and other inane garbage.
Consumer-wise it’s technically the easiest it’s ever been, with SR-IOV support for hardware-accelerating VMs on Intel 13th & 14th gen procs with iGPUs. However, iGPU performance is kinda dogshit, drivers are wonky, and passing multiple display heads through to VMs is weird for hypervisors.
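For reference, here’s roughly what the carving-up step looks like on a Linux host. Big caveat: this is a sketch that assumes the out-of-tree i915-sriov-dkms module (the stock i915 driver doesn’t expose VFs on these parts), and the exact parameters shift between kernel and module versions.

```shell
# Sketch: splitting an Intel iGPU into SR-IOV virtual functions.
# Assumes the out-of-tree i915-sriov-dkms module is installed;
# parameter names vary by kernel/module version.

# Kernel cmdline (e.g. in /etc/default/grub): enable the IOMMU and allow VFs
#   intel_iommu=on i915.enable_guc=3 i915.max_vfs=7

# After reboot, create the virtual functions (00:02.0 is the usual iGPU address):
echo 4 > /sys/bus/pci/devices/0000:00:02.0/sriov_numvfs

# Each VF shows up as its own PCI device you can hand to a VM:
lspci | grep VGA
```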
On the Docker side of things, YMMV based on what you’re trying to accomplish. Technically the NVIDIA Container Toolkit does support CUDA & display heads for containers: https://hub.docker.com/r/nvidia/vulkan/tags. I haven’t gotten it working yet, but this is the basis for my next set of experiments.
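If you want to try the same experiment, here’s a hedged starting point. It assumes the host has the NVIDIA driver plus nvidia-container-toolkit installed and an X server running; the image tag is a placeholder, so grab a real one from the Hub page above, and I haven’t verified vulkaninfo ships in the image.

```shell
# Sketch: running a Vulkan-capable container against the host GPU + display.
# Assumes nvidia-container-toolkit is installed and X is running.
# <tag> is a placeholder -- pick one from the Hub page linked above.

xhost +local:    # allow local containers to talk to X (this loosens security)
docker run --rm \
  --gpus all \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  nvidia/vulkan:<tag> \
  vulkaninfo    # should enumerate the GPU if passthrough works
```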
Are you running redundant routers, connections, ISPs, etc.? Compromise is part of the design process. If you have resiliency requirements, redundancy will help, but it ratchets up complexity and cost.
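As a concrete taste of that complexity: even just doubling up routers means running something like VRRP so a virtual IP floats between the boxes. A minimal keepalived sketch, with the interface name and addresses made up for illustration:

```shell
# Sketch: minimal VRRP failover with keepalived -- two routers share a
# floating virtual IP, and the backup takes over if the master dies.
# Interface name and addresses are placeholders.
cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER            # use BACKUP on the second box
    interface eth0
    virtual_router_id 51
    priority 100            # lower priority on the backup, e.g. 90
    advert_int 1
    virtual_ipaddress {
        192.168.1.1/24      # the gateway IP your clients point at
    }
}
EOF
systemctl restart keepalived
```

And that’s before you’ve touched dual ISPs or redundant switching, which is exactly the cost/complexity ratchet I mean.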
Security has the same kinds of compromises. I prefer to build security from the network up, leveraging tools like VLANs to start building the moat. Realistically, your reverse proxy is likely battle-tested if it’s configured correctly and kept updated; it’ll probably be the most secure component in your stack. If it’s configured correctly and still gets popped, half the Internet is already a wasteland.
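On Linux the VLAN side of the moat is just 802.1Q subinterfaces. Something like this, with VLAN IDs and addresses as placeholders you’d match to your switch config:

```shell
# Sketch: 802.1Q VLAN subinterfaces via iproute2.
# VLAN IDs and addresses are placeholders -- match them to your switch.
ip link add link eth0 name eth0.10 type vlan id 10   # DMZ VLAN
ip link add link eth0 name eth0.20 type vlan id 20   # services VLAN
ip addr add 10.0.10.2/24 dev eth0.10
ip addr add 10.0.20.2/24 dev eth0.20
ip link set eth0.10 up
ip link set eth0.20 up
```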
If you’re running containers, yeah, technically there are escape vectors, but again, your attacker would first need to pop the proxy software. It’d probably be way easier to go after the apps themselves.
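That said, you can shrink the blast radius if an app does get popped. These are all stock Docker flags; the image name is a placeholder:

```shell
# Sketch: stock Docker flags to limit damage from a compromised app.
# "myapp:latest" is a placeholder image name.
docker run -d \
  --read-only \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --pids-limit 256 \
  --memory 512m \
  myapp:latest
# --read-only: immutable root fs; --cap-drop ALL: no Linux capabilities;
# no-new-privileges: blocks setuid escalation; pids/memory limits cap runaways.
```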
Do something like this with NICs on each subnet:
DMZ VLAN <-> Proxy <-> Services VLAN
Double NIC on the proxy. One in each VLAN.
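In iproute2 terms, something like this on the proxy box (addresses are placeholders). The key design choice is leaving kernel forwarding off, so the proxy daemon is the only thing that can bridge the two segments:

```shell
# Sketch: proxy box with one NIC in each VLAN; addresses are placeholders.
ip addr add 192.0.2.10/24 dev eth0    # eth0 lives in the DMZ VLAN
ip addr add 10.0.20.10/24 dev eth1    # eth1 lives in the services VLAN

# Keep kernel routing between the NICs disabled so traffic can only cross
# via the proxy process itself, never raw IP forwarding:
sysctl -w net.ipv4.ip_forward=0
```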
llama?