Do you have any trouble with cooling or anything with them? Got like a billion unused PCIe lanes in my Dell R730 and can think of a few things that might benefit from a big NVMe ZFS pool.
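For context, the kind of pool I'm imagining would be something like striped NVMe mirrors - a rough sketch only, with the pool name and device paths as placeholders:

```sh
# Two striped mirrors across four hypothetical NVMe drives; find real
# device paths with `ls /dev/disk/by-id/`.
zpool create -o ashift=12 tank \
  mirror /dev/disk/by-id/nvme-drive0 /dev/disk/by-id/nvme-drive1 \
  mirror /dev/disk/by-id/nvme-drive2 /dev/disk/by-id/nvme-drive3

# Reasonable general-purpose defaults.
zfs set compression=lz4 atime=off tank
```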
SirNuke@kbin.social (OP) to Selfhosted@lemmy.world • "What's a good, cheap, no external power GPU to buy for VMs? Want to chuck a few in my Dell R730 server to make my desktop VMs more usable. Right now have an old K620 for a Windows VM, seems like 1030s" • 2 years ago

@TrenchcoatFullofBats I think this is the winning answer. Looks like it's about a 1060 6GB, which should be enough horsepower for several desktop VMs, and it keeps my full-profile slots open should I ever want to install something even more powerful in the future. vGPU support is also nice so I don't have to juggle which VM gets which GPU.
SirNuke@kbin.social (OP) • 2 years ago

@Nilz Do you know if the WX 5100 supports SR-IOV? I'm getting mixed answers about which AMD GPUs, if any, support it, but having VMs share a single physical GPU would be a perfect solution.
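(If I do pick one up, checking whether the card actually advertises the capability from Linux should be straightforward - the slot address below is a placeholder:)

```sh
# Find the GPU's PCI slot address first:
lspci | grep -i vga

# Then look for an SR-IOV capability block on that device:
sudo lspci -vvv -s 03:00.0 | grep -iA3 'Single Root I/O Virtualization'
```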
SirNuke@kbin.social (OP) • 2 years ago

@Nugget Yeah an older Quadro like the P600 is the fallback option. Looks like they run about $50 used on eBay.
SirNuke@kbin.social (OP) • 2 years ago

Actually I lied: according to the Dell manual, the full-profile slots have a connector that provides PCIe power, though I'd have to buy a cable for it. Long term the answer might be to get a used V100 and dive deep into the vGPU rabbit hole (erp).
SirNuke@kbin.social (OP) • 2 years ago

@JustEnoughDucks I am planning on getting an Intel Arc for my Jellyfin server at some point. Have an old Dell SFF with an 8700 that I think I'll eventually stuff into a 2U chassis. It's probably overkill for my VM server though, since the VMs really just need to not lag in desktop application work (aka IntelliJ) and play YouTube videos without obvious frame drops.
Only issue I had with a similar setup is that it turns out the old HP desktop I bought didn't support VT-d on the chipset, only on the CPU. Had to do some crazy hacks to get it to forward a 10GbE NIC plugged into the x16 slot.
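("Crazy hacks" here means kernel-parameter territory. I won't swear these are the exact flags I ended up with, but the usual suspects on a Proxmox-style Debian host look like this:)

```sh
# /etc/default/grub - enable the IOMMU; pcie_acs_override needs a
# patched kernel (the Proxmox kernel ships the ACS patch) and weakens
# isolation between devices, so it's a last resort.
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction"

# /etc/modprobe.d/vfio.conf - boards without proper interrupt
# remapping sometimes also need:
#   options vfio_iommu_type1 allow_unsafe_interrupts=1

update-grub && reboot
```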
Then I discovered the NIC I had was just old enough (ConnectX-3) that getting it to properly forward was finicky, so I had to buy a much more expensive ConnectX-4. My next task is to see if I can give it a virtual NIC, have OPNsense only listen to web requests on that interface, and use the host’s Nginx reverse proxy container for SSL.
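The reverse proxy half of that plan would look roughly like the below on the host. All names, addresses, and cert paths are placeholders, and 10.0.0.2 is just an assumed address for OPNsense on the internal virtual NIC:

```sh
cat > /etc/nginx/conf.d/opnsense.conf <<'EOF'
# Terminate TLS on the host, proxy to the OPNsense web UI over the
# host-only virtual NIC.
server {
    listen 443 ssl;
    server_name opnsense.example.internal;

    ssl_certificate     /etc/nginx/certs/example.crt;
    ssl_certificate_key /etc/nginx/certs/example.key;

    location / {
        proxy_pass https://10.0.0.2;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
nginx -s reload
```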
SirNuke@kbin.social to Selfhosted@lemmy.world • "If anyone is near MN MyPillow is auctioning off some server equipment" • 2 years ago

If you live near a major university, they likely have a property disposition office where you can pick up slightly older equipment, sometimes for super cheap.
I’ve found the idea of LXC containers to be better than they are in practice. I’ve migrated all of my servers to Proxmox and have been trying to move various services from VMs to LXC containers and it’s been such a hassle. You should be able to directly forward disk block devices, but just could not get them to mount for an MinIO array - ended up just setting their entire contents to 100000:100000 and mounting them on the host and forwarding the mount point instead. Never managed to CAP_IPC_LOCK to work correctly for a HashiCorp Vault install. Docker in LXC has some serious pain points and feels very fragile.
It’s damning that every time I have a problem with LXC the first search result will be a Proxmox forum topic with a Proxmox employee replying to the effect of “we recommend VMs over LXC for this use case” - Proxmox doesn’t seem to recommend LXC for anything. Proxmox + LXC is definitely better than CentOS + Podman, but my heart longs for the sheer competence of FreeBSD Jails.