That’s a nice setup. I am weirdly jealous of the sliding shelf. The CS350B is very nice as well.
Heat, then suction?
On a related note, I solved the battery issue with my wall-mounted Fire tablet (used as an HA dashboard) by connecting its power supply to a smart plug and setting up an automation that only gives it juice for about 3 hours per day, spread throughout the day.
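If it helps, here's roughly what one of those charge windows looks like in HA YAML (the plug entity id and the times are placeholders - I'm just sketching the pattern, repeat it for the other windows):

```yaml
# Placeholder entity id - swap in your own smart plug switch.
automation:
  - alias: "Tablet charge window on (morning)"
    trigger:
      - platform: time
        at: "08:00:00"
    action:
      - service: switch.turn_on
        target:
          entity_id: switch.tablet_plug
  - alias: "Tablet charge window off (morning)"
    trigger:
      - platform: time
        at: "09:00:00"
    action:
      - service: switch.turn_off
        target:
          entity_id: switch.tablet_plug
```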
From top to bottom:
You can also add tags that are searchable
I use several separate small servers in a Proxmox cluster. You can get a used Dell or HP SFF PC from eBay for cheap (example). The ones I am using all came with Intel T series processors that run at 35w.
You install Proxmox like any other OS (it’s basically Debian), then you can create VMs (or LXCs) to run whatever services you want.
If you have existing drives in a media server, you can pass those drives through to a VM pretty easily, or any PCI device, or even the entire PCI controller.
They also only pull 75w, which is an added bonus.
You may want to check out Craft Computing’s YT channel - he did a few episodes (Piped link) in his Cloud Gaming series on these cards.
Nvidia Tesla P4. Under $100 for a new one on eBay. Comes with a low profile bracket.
If you’re running Proxmox, you can even get the official vGPU drivers running so you can split the card between multiple VMs.
Is there a window in the room the closet is in? I’ve got a similar setup with a server rack in a closet (no ventilation, though). I recently purchased an in-window Midea AC that can be controlled by Home Assistant.
I have an automation that kicks on the AC when the temperature in the closet rises above a certain threshold and shuts it off when it drops back below. I just leave the closet door open by about a foot and that seems to be sufficient.
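Roughly what that looks like in HA YAML (the entity ids and thresholds are placeholders, and I've sketched a small on/off gap so it doesn't rapid-cycle - adjust to taste):

```yaml
automation:
  - alias: "Closet AC on when hot"
    trigger:
      - platform: numeric_state
        entity_id: sensor.closet_temperature   # placeholder sensor
        above: 27                              # placeholder threshold (°C)
    action:
      - service: climate.turn_on
        target:
          entity_id: climate.closet_ac         # placeholder AC entity
  - alias: "Closet AC off when cool"
    trigger:
      - platform: numeric_state
        entity_id: sensor.closet_temperature
        below: 25                              # small gap to avoid rapid cycling
    action:
      - service: climate.turn_off
        target:
          entity_id: climate.closet_ac
```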
It’s probably worth noting that I’m running pretty efficient hardware (35w i7s and a 75w Tesla P4) so it doesn’t get super hot, even under heavy load.
I’ve been daily driving a Debian 11 Proxmox VM running on an HP ProDesk Elite SFF with an i7-6700T and an ancient Nvidia GeForce GT 730 passed through.
I access it via ThinLinc running on a Dell Wyse 5070 Extended thin client. Works really well, even video isn’t bad, but it’s not for gaming.
For gaming, I’m working on setting up a Nobara VM with an Nvidia Tesla P4 passed through.
This is the correct answer.
Run an *arr stack somewhere on your network, install Jellyfin on the server and the Jellyfin app on the Shield and you’re golden, no need for subscriptions.
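A bare-bones sketch of what that stack can look like in a docker-compose file (the volume paths are placeholders, and this is just one way to lay it out - most people add more *arr services and a download client):

```yaml
# Minimal sketch - paths are placeholders, configure each app via its web UI.
services:
  jellyfin:
    image: jellyfin/jellyfin
    network_mode: host          # easiest for local discovery from the Shield
    volumes:
      - /path/to/media:/media
  sonarr:
    image: lscr.io/linuxserver/sonarr
    ports: ["8989:8989"]
    volumes:
      - /path/to/tv:/tv
  radarr:
    image: lscr.io/linuxserver/radarr
    ports: ["7878:7878"]
    volumes:
      - /path/to/movies:/movies
```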
Desktops and PCs are just OS name and version. Proxmox cluster is Ankh-Morpork (from Discworld) and nodes are Ankh-Morpork street names: Treacle Mine, Pseudopolis Yard, Attic Bee, etc.
Just an FYI to OP: If you’re looking to run docker containers, you should know that Proxmox specifically does NOT support running docker in an LXC, as there is a very good chance that stuff will break when you upgrade. You should really only run docker containers in VMs with Proxmox.
Just for completeness' sake - We don't recommend running docker inside of a container (precisely because it causes issues upon upgrades of kernel, LXC, and storage packages) - I would install docker inside of a QEMU VM, as this has fewer interactions with the host system and is known to run far more stably.
As far as I’m aware, everything in Proxmox is open source.
I think some people get annoyed by the Red Hat style paid support model, though. There is a separate repo for paying customers, but the non-subscription repo is just fine, and the official forums are a great place to get support, including from Proxmox employees.
I haven’t done it myself, but I have looked into the process in the past. I believe you do it just like passing any drive through to any Proxmox VM.
It’s fairly simple - you can either pass the entire drive itself through to the VM, or if you have a controller card the drive is attached to, you can pass that entire PCIe device through to the VM and the drive will just “come with it”.
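In config terms it's one line either way - here's a sketch of what ends up in the VM's conf file (the VM ID, disk serial, and PCI address are all placeholders; the usual way to add these is via `qm set` or the GUI rather than editing the file by hand):

```
# /etc/pve/qemu-server/100.conf (excerpt - 100 is a placeholder VM ID)

# Option 1: pass the whole drive through by its stable by-id path
scsi1: /dev/disk/by-id/ata-EXAMPLE_SERIAL

# Option 2: pass the entire controller; attached drives "come with it"
hostpci0: 0000:03:00.0
```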
I would say it’s at the “bottom” of the stack - Debian is the base layer, then Proxmox, then your VMs.
Clustering just lets the different nodes share resources (more options with ZFS) and allows management of all nodes in the cluster from the same GUI.
Another vote for Proxmox.
Backups: Proxmox Backup Server (yes, it can run in a Proxmox VM) is pretty great. You can use something like Duplicati to backup the PBS datastore to B2.
Performance: You can use ZFS in Proxmox, or not. ZFS gets you things like snapshots and raidz, but you will want to make sure you have a good amount of RAM available and that you leave about 20% of available drive space free. This is a good resource on ZFS in Proxmox.
Performance-wise, I have clusters with drives running ZFS and EXT4, and I haven’t really noticed much of a difference. But I’m also running low-powered SFF servers, so I’m not doing anything that requires a lot of heavy duty work.
Yeah, the tablet runs Fully Kiosk and I tried the same thing with the battery percentage thing and ran into the same issue, so I just simplified and made the automation time-based.
The tablet also likes to freeze a few times a day, so I created another automation: whenever HA loses its connection to the tablet for more than 5 seconds, it toggles the smart plug, then toggles it back to the state it was in when the automation started, which corrects the problem. Until the next time. But hey! It was only $60, so it’s fine.
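The watchdog looks roughly like this (the entity ids are placeholders, and I'm approximating the "toggle, then toggle back" with two toggles and a short delay - the real thing snapshots the plug's state first):

```yaml
automation:
  - alias: "Tablet watchdog"
    trigger:
      - platform: state
        entity_id: binary_sensor.tablet_online   # placeholder connectivity sensor
        to: "off"
        for: "00:00:05"
    action:
      - service: switch.toggle
        target:
          entity_id: switch.tablet_plug          # placeholder smart plug
      - delay: "00:00:05"
      - service: switch.toggle
        target:
          entity_id: switch.tablet_plug
```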