Published 2 years ago by VirtualizationHowto

Proxmox 8 Cluster with Ceph Storage configuration

Are you looking to set up a server cluster in your home lab? Proxmox is a great option, along with Ceph storage. In this video we take a deep dive into Proxmox clustering and how to configure a 3-node Proxmox server cluster with Ceph shared storage, including Ceph OSDs, Monitors, and a Ceph storage pool. At the end I test a migration of a Windows Server 2022 virtual machine between Proxmox nodes using the shared Ceph storage. Cool stuff.

3 Proxmox builds below:

BD795M:
– Minisforum BD795M
– RackChoice 2U Micro ATX Compact
– Cooler Master MWE Gold 850 V2
– Crucial 128 GB 5600MT/sec RAM
– Noctua NH-L9i-17xx, Premium Low-Profile CPU Cooler
– Crucial 96GB kit of DDR5 SODIMM memory
– Intel X520-DA2 10 GbE network adapter
– Kingston 240 GB drive for boot
– Samsung 990 Pro 2TB
– MX-4 thermal paste
– Case fans:

BD795i SE:
– Minisforum BD795i SE mini-ITX motherboard
– RackChoice 2U Micro ATX Compact Rackmount
– Cooler Master MWE Gold 850
– Crucial 128 GB 5600MT/sec RAM
– Kingston 240 GB drive for boot
– Intel X520-DA2 10 GbE network adapter
– Samsung 990 Pro 2TB
– Arctic P12 slim fan

BD790i X3D:
– Thermaltake Tower 200:
– Minisforum BD790i X3D:
– Cooler Master MWE Gold 850 V2
– Crucial 128 GB 5600MT/sec RAM
– Arctic P12 slim fan
– Predator 1TB boot drive:
– Samsung 990 Pro 4TB VM storage:

★ Subscribe to the channel:
★ My blog:
★ Twitter:
★ LinkedIn:
★ Github:
★ Facebook:
★ Discord:

Chapters:
Introduction to running a Proxmox server cluster - 0:00
Talking about Proxmox, open-source hypervisors, etc. - 0:48
Thinking about high availability requires thinking about storage - 1:20
Overview of creating a Proxmox 8 cluster and Ceph - 2:10
Beginning the process to configure a Proxmox 8 cluster - 2:24
Looking at the create cluster operation - 3:03
Kicking off the cluster creation process - 3:25
Join information to use with the member nodes to join the cluster - 3:55
Joining the cluster on another node and entering the root password - 4:15
Joining the 3rd node to the Proxmox 8 cluster - 5:13
Refreshing the browser and checking that we can see all the Proxmox nodes - 5:40
Overview of Ceph - 6:11
Distributed file system and sharing storage through the logical storage volume - 6:30
Beginning the installation of Ceph on the Proxmox nodes - 6:52
Changing the repository to the no-subscription model - 7:30
Verifying the installation of Ceph - 7:51
Selecting the IP subnet available under Public network and Cluster network - 8:06
Looking at the replicas configuration - 8:35
Installation is successful; looking at the checklist to install Ceph on other nodes - 8:50
The Ceph Object Storage Daemon (OSD) - 9:27
Creating the OSD and designating the disk in our Proxmox hosts for Ceph - 9:50
Selecting the disk for the OSD - 10:15
Creating the OSD on node 2 - 10:40
Creating the OSD on node 3 - 11:00
Looking at the Ceph dashboard and health status - 11:25
Creating the Ceph pool - 11:35
All Proxmox nodes display the Ceph pool - 12:00
Ceph Monitor overview - 12:22
Beginning the process to create additional monitors - 13:00
Setting up the test for live migration using Ceph storage - 13:30
Beginning a continuous ping - 14:00
The VM is on the Ceph storage pool - 14:25
Kicking off the migration - 14:35
Only the memory map is copied between the two Proxmox hosts - 14:45
Distributed shared storage is working between the nodes - 15:08
Nested configuration in my lab, but it still works great - 15:35
Concluding thoughts on Proxmox clustering in Proxmox 8 and Ceph for shared storage - 15:49
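The cluster-creation and join steps shown in the video through the web UI can also be done from a Proxmox node's shell. A minimal sketch, assuming a cluster name of "homelab" and a first-node IP of 10.1.1.11 (both placeholders for your own values):

```shell
# On the first node: create the cluster (name is a placeholder)
pvecm create homelab

# On each additional node: join using the first node's IP;
# you are prompted for that node's root password, matching the
# join-information step shown in the video
pvecm add 10.1.1.11

# Verify that all three nodes are members and the cluster is quorate
pvecm status
pvecm nodes
```

These commands require real Proxmox VE hosts; run `pvecm status` after each join to confirm membership before moving on to Ceph.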
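The Ceph installation, network, OSD, monitor, and pool steps from the video also have CLI equivalents. A hedged sketch, assuming an example Ceph network of 10.1.1.0/24, a spare disk at /dev/nvme1n1, and a pool named "ceph-pool" (all placeholders):

```shell
# On each node: install Ceph from the no-subscription repository,
# as shown in the video
pveceph install --repository no-subscription

# Once, on the first node: initialize Ceph and pick the network
# used for the Public/Cluster network selection in the wizard
pveceph init --network 10.1.1.0/24

# On each node: create a monitor (the video adds monitors on all three)
pveceph mon create

# On each node: turn the spare disk into a Ceph OSD
pveceph osd create /dev/nvme1n1

# Create a replicated pool and register it as Proxmox storage;
# the replica settings shown in the video are the defaults (size 3, min_size 2)
pveceph pool create ceph-pool --add_storages
```

After this, the Ceph dashboard on any node should show HEALTH_OK once all OSDs and monitors are up, matching the health check in the video.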
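The live-migration test at the end can likewise be reproduced from the shell. A sketch assuming a VM ID of 100 and a target node named pve2 (placeholders):

```shell
# From another machine, run a continuous ping against the VM's IP
# to watch for dropped packets during the migration

# Online-migrate the running VM; because its disk lives on the shared
# Ceph pool, only the memory state is copied between the Proxmox hosts
qm migrate 100 pve2 --online
```

With shared storage, the migration typically completes with at most a brief pause in the continuous ping, which is the behavior demonstrated in the video.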