NAME
pveceph - Manage Ceph Services on Proxmox VE Nodes
SYNOPSIS
pveceph <COMMAND> [ARGS] [OPTIONS]
pveceph createmon
Create Ceph Monitor
pveceph createosd <dev> [OPTIONS]
Create OSD
- <dev>: <string>
  Block device name.
- -bluestore <boolean> (default = 0)
  Use bluestore instead of filestore.
- -fstype <btrfs | ext4 | xfs> (default = xfs)
  File system type (filestore only).
- -journal_dev <string>
  Block device name for journal.
pveceph createpool <name> [OPTIONS]
Create POOL
- <name>: <string>
  The name of the pool. It must be unique.
- -crush_ruleset <integer> (0 - 32768) (default = 0)
  The ruleset to use for mapping object placement in the cluster.
- -min_size <integer> (1 - 7) (default = 1)
  Minimum number of replicas per object.
- -pg_num <integer> (8 - 32768) (default = 64)
  Number of placement groups.
- -size <integer> (1 - 7) (default = 2)
  Number of replicas per object.
pveceph destroymon <monid>
Destroy Ceph monitor.
- <monid>: <integer>
  Monitor ID.
pveceph destroyosd <osdid> [OPTIONS]
Destroy OSD
- <osdid>: <integer>
  OSD ID.
- -cleanup <boolean> (default = 0)
  If set, we remove partition table entries.
pveceph destroypool <name> [OPTIONS]
Destroy pool
- <name>: <string>
  The name of the pool. It must be unique.
- -force <boolean> (default = 0)
  If true, destroys pool even if in use.
pveceph help [<cmd>] [OPTIONS]
Get help about specified command.
- <cmd>: <string>
  Command name.
- -verbose <boolean>
  Verbose output format.
pveceph init [OPTIONS]
Create initial ceph default configuration and setup symlinks.
- -disable_cephx <boolean> (default = 0)
  Disable cephx authentication.
  Note: cephx is a security feature protecting against man-in-the-middle attacks. Only consider disabling cephx if your network is private!
- -min_size <integer> (1 - 7) (default = 2)
  Minimum number of available replicas per object to allow I/O.
- -network <string>
  Use specific network for all ceph related traffic.
- -pg_bits <integer> (6 - 14) (default = 6)
  Placement group bits, used to specify the default number of placement groups.
  Note: osd pool default pg num does not work for default pools.
- -size <integer> (1 - 7) (default = 3)
  Targeted number of replicas per object.
pveceph install [OPTIONS]
Install ceph related packages.
- -version <luminous>
  no description available
pveceph lspools
List all pools.
pveceph purge
Destroy ceph related data and configuration files.
pveceph start [<service>]
Start ceph services.
- <service>: (mon|mds|osd)\.[A-Za-z0-9]{1,32}
  Ceph service name.
pveceph status
Get ceph status.
pveceph stop [<service>]
Stop ceph services.
- <service>: (mon|mds|osd)\.[A-Za-z0-9]{1,32}
  Ceph service name.
DESCRIPTION

Proxmox VE unifies your compute and storage systems, i.e. you can use the same physical nodes within a cluster for both computing (processing VMs and containers) and replicated storage. The traditional silos of compute and storage resources can be wrapped up into a single hyper-converged appliance. Separate storage networks (SANs) and connections via network attached storage (NAS) disappear. With the integration of Ceph, an open source software-defined storage platform, Proxmox VE has the ability to run and manage Ceph storage directly on the hypervisor nodes.
Ceph is a distributed object store and file system designed to provide excellent performance, reliability and scalability. For smaller deployments, it is possible to install a Ceph server for RADOS Block Devices (RBD) directly on your Proxmox VE cluster nodes, see Ceph RADOS Block Devices (RBD). Recent hardware has plenty of CPU power and RAM, so running storage services and VMs on the same node is possible.
To simplify management, we provide pveceph - a tool to install and manage Ceph services on Proxmox VE nodes.
Precondition
To build a Proxmox Ceph Cluster, there should be at least three, preferably identical, servers for the setup.
A 10Gb network, exclusively used for Ceph, is recommended. A meshed network setup is also an option if no 10Gb switches are available, see the wiki.
Check also the recommendations from Ceph’s website.
Installation of Ceph Packages
On each node run the installation script as follows:
pveceph install
This sets up an apt package repository in /etc/apt/sources.list.d/ceph.list and installs the required software.
Creating initial Ceph configuration

After installing the packages, you need to create an initial Ceph configuration on just one node, based on the network dedicated to Ceph (10.10.10.0/24 in the following example):
pveceph init --network 10.10.10.0/24
This creates an initial config at /etc/pve/ceph.conf. That file is automatically distributed to all Proxmox VE nodes using pmxcfs. The command also creates a symbolic link from /etc/ceph/ceph.conf pointing to that file, so you can simply run Ceph commands without the need to specify a configuration file.
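For example, once the symbolic link is in place, the standard Ceph tools can be run without passing a -c <config> option. As a quick sanity check (this will only report a healthy cluster after monitors and OSDs have been created in the following steps) you could run:
ceph -s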
Creating Ceph Monitors
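The monitors are created with the createmon command from the reference above. As a sketch, run the following on each node that should host a monitor (at least three monitors on different nodes are recommended to maintain a quorum):
pveceph createmon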
Creating Ceph OSDs
pveceph createosd /dev/sd[X]
If you want to use a dedicated SSD journal disk:
Note: In order to use a dedicated journal disk (SSD), the disk needs to have a GPT partition table. You can create this with gdisk /dev/sd(x). If there is no GPT, you cannot select the disk as journal. Currently the journal size is fixed to 5 GB.
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[X]
Example: Use /dev/sdf as the data disk (4TB) and /dev/sdb as the dedicated SSD journal disk.
pveceph createosd /dev/sdf -journal_dev /dev/sdb
This partitions the disk (data and journal partition), creates filesystems and starts the OSD; afterwards it is running and fully functional. Please create at least 12 OSDs, distributed among your nodes (4 OSDs on each node).
It should be noted that this command refuses to initialize a disk when it detects existing data. So if you want to overwrite a disk, you should remove the existing data first. You can do that using:
ceph-disk zap /dev/sd[X]
You can create OSDs containing both journal and data partitions, or you can place the journal on a dedicated SSD. Using an SSD journal disk is highly recommended to achieve good performance.
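Once OSDs have been created on all nodes, it can be useful to verify that they are up and distributed across the nodes as expected. The standard Ceph tooling provides this, for example:
ceph osd tree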
Ceph Pools
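Pools are created with the createpool command documented in the reference above. As a sketch, the following creates a pool for VM images; the pool name vm-pool and the chosen replica and placement group counts are examples only and should be adapted to your cluster:
pveceph createpool vm-pool -size 3 -min_size 2 -pg_num 128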
Ceph Client

You can then configure Proxmox VE to use such pools to store VM or Container images. Simply use the GUI to add a new RBD storage (see section Ceph RADOS Block Devices (RBD)).
You also need to copy the keyring to a predefined location.
Note: The file name needs to be <storage_id> + .keyring, where <storage_id> is the expression after rbd: in /etc/pve/storage.cfg (my-ceph-storage in the following example):
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
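For reference, the matching RBD storage definition in /etc/pve/storage.cfg could look roughly like the sketch below; the monitor addresses and the pool name are placeholders and depend on your setup:
rbd: my-ceph-storage
        monhost 10.10.10.1 10.10.10.2 10.10.10.3
        pool rbd
        content images
        username admin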
Copyright and Disclaimer
Copyright © 2007-2017 Proxmox Server Solutions GmbH
This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License along with this program. If not, see http://www.gnu.org/licenses/