Proxmox VE (Virtual Environment) is a comprehensive open-source platform that integrates compute, storage, and networking resources to manage virtualized servers efficiently. One of its standout features is the ability to manage Ceph storage directly from the Proxmox web GUI. Ceph is a powerful, scalable, and resilient software-defined storage solution that is well suited to clustered environments.
This guide will walk you through setting up a Proxmox cluster with Ceph storage, assuming that you are starting with a fresh Proxmox installation.
Prerequisites
- A minimum of three nodes running Proxmox VE for the Ceph cluster.
- Each node should have at least one dedicated, unused disk (no partitions or existing filesystem) for Ceph OSDs.
- A stable Internet connection for all nodes.
- Basic knowledge of Linux command line and network configurations.
Step 1: Proxmox VE Installation
For each of your nodes, download the latest ISO image of Proxmox VE from the official website and install it. During the installation process, ensure that each node has a unique hostname and that all nodes are on the same network.
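A simple way to make sure every node can resolve its peers by name is to list them in /etc/hosts on each node. The hostnames and addresses below are placeholders; substitute your own.

```
# /etc/hosts (example entries; one line per cluster node, identical on every node)
192.168.1.11  pve1.example.local  pve1
192.168.1.12  pve2.example.local  pve2
192.168.1.13  pve3.example.local  pve3
```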
Step 2: Configuring the Network
Next, the Proxmox VE hosts need to be able to communicate with one another. Configure your network settings (using either DHCP or static IP addresses) and make sure all hosts can ping each other. For Ceph, a dedicated or at least stable, low-latency network for storage traffic is strongly recommended.
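For reference, a static configuration on a Proxmox node usually lives in /etc/network/interfaces and uses the vmbr0 bridge that the installer creates. The interface name and addresses below are examples only; adapt them to your hardware and subnet, apply the change with ifreload -a (or a reboot), and then confirm connectivity with ping from each node.

```
# /etc/network/interfaces on the first node (example values)
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.11/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```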
Step 3: Setting Up the Proxmox Cluster
After you have installed Proxmox VE on all nodes and configured the network, it’s time to set up your Proxmox cluster.
Select one of your nodes to create a new cluster. In the Proxmox GUI, go to Datacenter > Cluster > Create Cluster, give it a name, and follow the prompts.
To add the other nodes, open Datacenter > Cluster > Join Information on the node that created the cluster and copy the join information. Then, on each node you want to add, go to Datacenter > Cluster > Join Cluster, paste the information, and enter the root password of the existing cluster node. (Alternatively, you can join from the shell with pvecm, as shown below.)
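If you prefer the command line, the same steps can be done with pvecm. The cluster name and IP address below are placeholders.

```
# On the first node: create the cluster
pvecm create my-cluster

# On each additional node: join, pointing at the first node's IP
pvecm add 192.168.1.11

# On any node: check membership and quorum
pvecm status
```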
Step 4: Installing Ceph
With the cluster set up, we can now install Ceph. Select a node in the GUI, open its Ceph panel (Node > Ceph), and the interface will offer to install the Ceph packages; follow the wizard. On the first node, the wizard also asks you to initialize the Ceph configuration, including choosing the network Ceph should use. Repeat the installation on every node in the cluster.
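The shell equivalent uses pveceph. The network below is an example; point it at the subnet (ideally a dedicated one) that your Ceph traffic should use.

```
# Run on every node: install the Ceph packages
pveceph install

# Run once, on the first node only: write the initial Ceph configuration
pveceph init --network 192.168.1.0/24
```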
Step 5: Creating Ceph Monitors and OSDs
In a Ceph setup, the monitors and OSDs (Object Storage Daemons) are essential. Monitors maintain the maps of the cluster state and provide the quorum the cluster relies on, while OSDs store the actual data and handle replication, recovery, and backfilling; both also expose metrics to Ceph's dashboard.
To create a monitor, select a node and go to Node > Ceph > Monitor, then click Create. Repeat this on all nodes; three monitors are the usual minimum for a stable quorum.
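From the shell, the same can be done with pveceph on each node that should run a monitor. A manager (mgr) daemon is also needed for a healthy cluster; if the setup wizard did not already create one, add it as shown below.

```
# Run on each node that should host a monitor
pveceph mon create

# Optionally, run on one or two nodes to add Ceph manager daemons
pveceph mgr create
```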
Next, create the OSDs: on each node, go to Node > Ceph > OSD, click Create: OSD, and select the dedicated disk you set aside for Ceph. Repeat this for every Ceph disk on every node.
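The CLI equivalent is pveceph osd create. The device name below is an example, so double-check which disk is the dedicated Ceph disk on each node (for instance with lsblk) before running it.

```
# Run on every node, once per dedicated Ceph disk (device name is an example)
pveceph osd create /dev/sdb
```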
Step 6: Configuring Ceph Storage Pool
Now, let’s create a Ceph storage pool. On one node, navigate to Node > Ceph > Pools and click Create. Give the pool a name, then set Size (the number of replicas kept of each object; 3 is the common choice) and Min Size (typically 2). The GUI creates replicated pools, which are the right choice for most use cases. If you leave “Add as Storage” enabled, Proxmox automatically registers the pool as RBD storage for the whole cluster; otherwise, add it manually via Datacenter > Storage > Add > RBD, give it a unique ID, and select the pool.
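From the shell, a pool and the matching Proxmox storage can be created like this; the pool name, storage ID, and values are examples.

```
# Create a replicated pool with 3 copies, 2 of which must be available for writes
pveceph pool create vm-pool --size 3 --min_size 2

# Register the pool as RBD storage for VM disks and container volumes
pvesm add rbd ceph-vm --pool vm-pool --content images,rootdir
```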
Step 7: Verifying the Cluster
Now that everything is set up, let’s verify that the cluster is functioning as expected. You can check the Ceph status in the Proxmox GUI under Datacenter > Ceph (or on each node under Node > Ceph). You should see your monitors, OSDs, and pools, and the overall health should report HEALTH_OK.
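The same checks are quick to run from any node’s shell:

```
# Overall Ceph health, monitor quorum, and OSD/PG status (expect HEALTH_OK)
ceph -s

# Per-OSD view: every OSD should be "up" and "in"
ceph osd tree

# Proxmox cluster membership and quorum
pvecm status
```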
Step 8: Creating a Virtual Machine (VM)
To fully test the setup, create a new VM. In the Create VM wizard, on the Disks tab, select your Ceph RBD storage for the VM’s hard disk, then install the guest operating system as you normally would. If everything is set up correctly, the VM should run smoothly from the Ceph storage.
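As a rough sketch, the same test VM can be created from the shell with qm; the VM ID, name, and storage ID (“ceph-vm”, matching the storage added earlier) are examples.

```
# Create a test VM with a 32 GB disk allocated on the Ceph-backed storage
qm create 100 --name ceph-test --memory 2048 --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci --scsi0 ceph-vm:32
```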
Conclusion
Setting up a Proxmox cluster with Ceph storage integration might seem intimidating initially, but with the right steps and guidance, it’s a powerful combination that offers great value to businesses and hobbyists alike. By successfully integrating these two platforms, users benefit from a seamless, high-performance, and resilient infrastructure that’s primed for virtualized environments. As with all technical endeavors, regular maintenance and monitoring will ensure the longevity and health of your system.
If you found this article helpful, you may want to look at our post about the recommended way to set up a Ceph Cluster.