Published 2 years ago by apalrd's adventures

All about POOLS | Proxmox + Ceph Hyperconverged Cluster: Fancy Configurations for RBD

In this video, I expand on the last video of my hyperconverged Proxmox + Ceph cluster to create more custom pool layouts than Proxmox's GUI allows. This includes setting the storage class (HDD / SSD / NVMe), the failure domain, and even erasure coding of pools. All of this is then set up as a storage location in Proxmox for RBD (RADOS Block Device), so we can store VM disks on it. After all of this, I now have the flexibility to assign VM disks to HDDs or SSDs, and to use erasure coding to get 66% storage efficiency instead of 33% (doubling my usable capacity on the same disks!). With more nodes and disks, I could improve both the storage efficiency and the failure resilience of my cluster, but with only the small number of disks I have, I opted for a basic 2+1 erasure code.

Blog post for this video (tbh not a whole lot there):

My Discord Server, where you can chat about any of this:

If you find my content useful and would like to support me, feel free to here:

This video is part of my Hyperconverged Cluster Megaproject:

Find an HP Microserver Gen8 like mine on eBay (maybe you'll get lucky idk):

Timestamps:
00:00 - Introduction
01:11 - Cluster Overview
02:16 - Ceph Web Manager
03:10 - Failure Domains
04:24 - Custom CRUSH Ruleset
06:18 - Storage Efficiency & Erasure Codes
08:00 - Creating Erasure Coded Pools
12:35 - Results

Some links to products may be affiliate links, which may earn a commission for me.
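The 66% vs. 33% figures follow directly from how replication and erasure coding consume raw capacity: a replicated pool stores full copies, while an erasure-coded pool stores k data chunks plus m parity chunks per object. A minimal sketch of that arithmetic (the function names are mine, and the 2+1 / size-3 values match the setup described above; nothing here talks to a real cluster):

```python
def replicated_efficiency(size: int) -> float:
    """A replicated pool stores `size` full copies of every object,
    so only 1/size of the raw capacity holds unique data."""
    return 1 / size

def erasure_coded_efficiency(k: int, m: int) -> float:
    """An EC pool splits each object into k data chunks and adds
    m parity chunks, so k/(k+m) of the raw capacity is usable."""
    return k / (k + m)

# 3-way replication (the Proxmox GUI default) leaves 1/3 of raw capacity usable.
print(f"replica 3: {replicated_efficiency(3):.1%}")
# A 2+1 erasure code leaves 2/3 usable - double the capacity on the same disks.
print(f"EC 2+1:    {erasure_coded_efficiency(2, 1):.1%}")
```

With more disks the ratio improves further (e.g. 4+2 keeps the same 2/3 efficiency but survives two failures), which is the trade-off the video alludes to when mentioning larger clusters.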
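The steps the video walks through (a device-class CRUSH rule, an erasure-code profile, an EC pool, and wiring it into Proxmox as RBD storage) can be sketched with the standard Ceph and Proxmox CLIs. This is a hedged outline rather than a transcript of the video's exact commands: the rule, profile, and pool names are invented for illustration, the placement-group counts are arbitrary, and note that an EC pool used for RBD still needs a replicated pool alongside it to hold image metadata:

```shell
# Replicated CRUSH rule that only selects SSD-class OSDs, spreading
# copies across hosts:
#   create-replicated <rule-name> <crush-root> <failure-domain> <device-class>
ceph osd crush rule create-replicated ssd-only default host ssd

# 2+1 erasure-code profile on HDDs (2 data chunks + 1 parity chunk per object)
ceph osd erasure-code-profile set ec-2-1 k=2 m=1 \
    crush-failure-domain=host crush-device-class=hdd

# Replicated pool for RBD metadata (image headers/omap can't live on EC)
ceph osd pool create ec-meta 32 32 replicated ssd-only

# Erasure-coded data pool; RBD requires partial-overwrite support
ceph osd pool create ec-data 32 32 erasure ec-2-1
ceph osd pool set ec-data allow_ec_overwrites true

ceph osd pool application enable ec-meta rbd
ceph osd pool application enable ec-data rbd

# Register it as Proxmox storage: metadata goes to --pool,
# VM disk data lands on --data-pool
pvesm add rbd ec-rbd --pool ec-meta --data-pool ec-data \
    --content images,rootdir
```

These commands only make sense against a live Ceph/Proxmox cluster, so treat them as a reference outline to adapt, not a copy-paste script.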