I’ve added 2 external 2TB USB drives to my Proxmox server and created a Resource Pool called USB_HDD containing both. I created an Ubuntu VM, but I can’t allocate all 4TB to it in one go - it only lets me add each one as a separate SCSI device. When I start to install the OS it only allows the install onto one of the 2TB devices. I thought the point of pools was to make the actual disks transparent and present the pool to the VM so it sees it as one lot of space. Am I doing something wrong, or do I have to have it as 2x2TB disks?
I think the question is whether you want to add the external USB hard drives to Proxmox so that other VMs and LXCs can benefit from them as well as the mentioned Ubuntu VM, or whether you want them added exclusively to the Ubuntu VM and the Ubuntu VM only.
If it is the latter, you can leverage USB device passthrough and pass the two drives through to the VM whole. Then you can do whatever you want with them inside the VM, e.g. use ZFS, Btrfs, or mdadm to create a stripe (aka RAID0) config.
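A minimal sketch of that approach, assuming VM ID 100 and placeholder USB vendor:product IDs and device names (check `lsusb` on the host and `lsblk` inside the VM for your actual values):

```shell
# On the Proxmox host: find the vendor:product IDs of the enclosures,
# then pass both drives through to the VM (VM ID 100 is an assumption).
lsusb
qm set 100 -usb0 host=1234:5678   # first USB enclosure (placeholder ID)
qm set 100 -usb1 host=1234:5679   # second USB enclosure (placeholder ID)

# Inside the Ubuntu VM: stripe the two drives with ZFS into one ~4TB
# pool. RAID0-like: if either disk fails, everything on both is lost.
sudo apt install zfsutils-linux
sudo zpool create media /dev/sdb /dev/sdc   # device names are assumptions
```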
As per the documentation, pools are basically just resource groups to make permission management easier, so they aren’t really supposed to handle anything like that. Maybe look into a RAID setup of some sort or mergerfs if you just need file level pooling.
Thanks for the responses; it seems I can’t really do it. I looked into ZFS, but if I use that it halves the available disk space to 2TB. I’m using the VM for a media server and thought it would be better to have one 4TB space instead of two 2TB disks. At the end of the day it isn’t a big deal, I just thought I’d be able to present both disks as one.
looked into ZFS but if I use that it halves the available disk space to 2TB.
Look harder :-)
ZFS can do it either way.
Indeed, ZFS has supported stripe (RAID0-like) zpools since very early days.
Would it be better to make single-disk vdevs and add them into a pool, so you could add a mirror disk to each vdev later?
Oh right, my fault. Striping is done at the pool level, so the two disks will be their own vdevs, and then the two vdevs are added into one zpool.
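In practice that’s a single `zpool create` with both disks listed, which makes each one a single-disk top-level vdev; ZFS stripes writes across top-level vdevs. A sketch, with the pool name and `/dev/disk/by-id` paths as placeholders (by-id paths are safer for USB drives, whose `sdX` names can change between boots):

```shell
# Two whole disks on one create line = two single-disk vdevs = stripe.
zpool create tank \
  /dev/disk/by-id/usb-DISK_A \
  /dev/disk/by-id/usb-DISK_B
zpool status tank   # shows both disks as separate top-level vdevs
```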
Be aware that in RAID0, if one of the HDDs fails, the content on both will be lost.
I would suggest adding both disks to the VM, and then using mergerfs installed inside it to pool the disks together for easier media storage.
Only if you never want to use those disks for anything else on that server
In this case ‘adding disks’ could mean either direct pass-through, or just adding a virtual disk on each physical disk in Proxmox so they can be used for other things as well.
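A minimal mergerfs sketch inside the VM, assuming the two disks show up as `/dev/sdb1` and `/dev/sdc1` and using illustrative mount points:

```shell
# Mount each disk normally, then pool both mount points with mergerfs.
sudo apt install mergerfs
sudo mkdir -p /mnt/disk1 /mnt/disk2 /mnt/media
sudo mount /dev/sdb1 /mnt/disk1
sudo mount /dev/sdc1 /mnt/disk2
# category.create=mfs writes new files to whichever disk has most free space.
sudo mergerfs -o defaults,category.create=mfs /mnt/disk1:/mnt/disk2 /mnt/media
```

Unlike RAID0, this is file-level pooling: if one disk dies you only lose the files that happened to live on it.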
ZFS can absolutely combine both into a 4TB single volume. Well, more like 3.78TB, but you get the picture.
Note that, just like any other method which combines disks in a “RAID0” way, if one drive fails you lose everything on both drives.
But yes, as others have said, “Pools” in Proxmox (or, more accurately, “Resource Pools”) are not related to storage but to permissions. A pool can contain VMs, network bridges, even storage, but pools aren’t used for creating storage, only for controlling access to it in an environment with lots of users (like an enterprise).
ZFS and Ceph also use the term “Pool” but in a different context. They’re talking about a pool of combined storage, which is what you are looking for.
Put each USB disk into a ZFS vdev and then combine the two vdevs into a pool. You can add more drives into the vdev later to create mirrors within the vdev and get a RAID10-like setup once you have the means.
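A sketch of that upgrade path, with all device names as placeholders: start with a stripe of two single-disk vdevs, then later attach a mirror disk to each vdev with `zpool attach`.

```shell
# Now: stripe of two single-disk vdevs (~4TB, no redundancy).
zpool create tank /dev/sdb /dev/sdc

# Later, once two more disks are available, mirror each vdev in place:
zpool attach tank /dev/sdb /dev/sdd   # sdb+sdd become a mirror vdev
zpool attach tank /dev/sdc /dev/sde   # sdc+sde become a mirror vdev
zpool status tank                     # two mirror vdevs = RAID10-like layout
```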
I believe a resource pool is just a group of resources, so adding a resource pool of 2 devices is exactly the same as adding those 2 devices manually (I normally use them as VM groups for backups, though).
You might be able to do striping (combining 2 devices into 1) using ZFS in Proxmox; I think LVM can do it too, but you’d be using the command line.
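For the LVM route, a sketch of joining both disks into one volume group and carving a single logical volume that spans them; the device names, volume group name, and LV name are all assumptions:

```shell
# Mark both disks as LVM physical volumes and join them in one VG.
pvcreate /dev/sdb /dev/sdc
vgcreate media_vg /dev/sdb /dev/sdc

# One ~4TB LV spanning both disks (add "-i 2" for a striped rather
# than linear layout). Either way, losing one disk loses the LV.
lvcreate -l 100%FREE -n media_lv media_vg
mkfs.ext4 /dev/media_vg/media_lv
```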
Bump
Sounds like you’re passing the physical HDDs to the VM instead of creating a new virtual HDD file in your Proxmox storage pool.
If you don’t mind me asking, what are you trying to do with your VM?