8 Deploying the remaining core services using cephadm
After deploying the basic Ceph cluster, deploy core services to more cluster nodes. To make the cluster data accessible to clients, deploy additional services as well.
Currently, we support deployment of Ceph services on the command line by using the Ceph orchestrator (ceph orch subcommands).
8.1 The ceph orch command
The Ceph orchestrator command ceph orch, which is an interface to the cephadm module, takes care of listing cluster components and deploying Ceph services on new cluster nodes.
8.1.1 Displaying the orchestrator status
The following command shows the current mode and status of the Ceph orchestrator.
cephuser@adm > ceph orch status
8.1.2 Listing devices, services, and daemons
To list all disk devices, run the following:
cephuser@adm > ceph orch device ls
Hostname Path Type Serial Size Health Ident Fault Available
ses-master /dev/vdb hdd 0d8a... 10.7G Unknown N/A N/A No
ses-min1 /dev/vdc hdd 8304... 10.7G Unknown N/A N/A No
ses-min1 /dev/vdd hdd 7b81... 10.7G Unknown N/A N/A No
[...]
Service is a general term for a Ceph service of a specific type, for example Ceph Manager.
Daemon is a specific instance of a service, for example a process mgr.ses-min1.gdlcik running on a node called ses-min1.
To list all services known to cephadm, run:
cephuser@adm > ceph orch ls
NAME RUNNING REFRESHED AGE PLACEMENT IMAGE NAME IMAGE ID
mgr 1/0 5m ago - <no spec> registry.example.com/[...] 5bf12403d0bd
mon 1/0 5m ago - <no spec> registry.example.com/[...] 5bf12403d0bd
You can limit the list to services on a particular node with the optional --host parameter, and services of a particular type with the optional --service-type parameter. Acceptable types are mon, osd, mgr, mds, and rgw.
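For example, to show only MON services placed on the node ses-min1, the two filters can be combined as follows (a minimal sketch; the host name is taken from the examples above):
cephuser@adm > ceph orch ls --host ses-min1 --service-type mon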
To list all running daemons deployed by cephadm, run:
cephuser@adm > ceph orch ps
NAME HOST STATUS REFRESHED AGE VERSION IMAGE ID CONTAINER ID
mgr.ses-min1.gd ses-min1 running) 8m ago 12d 15.2.0.108 5bf12403d0bd b8104e09814c
mon.ses-min1 ses-min1 running) 8m ago 12d 15.2.0.108 5bf12403d0bd a719e0087369
To query the status of a particular daemon, use --daemon_type and --daemon_id. For OSDs, the ID is the numeric OSD ID. For MDS, the ID is the file system name:
cephuser@adm > ceph orch ps --daemon_type osd --daemon_id 0
cephuser@adm > ceph orch ps --daemon_type mds --daemon_id my_cephfs
8.2 Service and placement specification
The recommended way to specify the deployment of Ceph services is to create a YAML-formatted file with the specification of the services that you intend to deploy.
8.2.1 Creating service specifications
You can create a separate specification file for each type of service, for example:
root@master # cat nfs.yml
service_type: nfs
service_id: EXAMPLE_NFS
placement:
  hosts:
  - ses-min1
  - ses-min2
spec:
  pool: EXAMPLE_POOL
  namespace: EXAMPLE_NAMESPACE
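A single-service file such as this is applied the same way as a full cluster specification (see Section 8.2.3, “Applying cluster specification”); for example, assuming the nfs.yml file shown above:
cephuser@adm > ceph orch apply -i nfs.yml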
Alternatively, you can specify multiple (or all) service types in one file, for example cluster.yml, that describes which nodes will run specific services. Remember to separate individual service types with three dashes (---):
cephuser@adm > cat cluster.yml
service_type: nfs
service_id: EXAMPLE_NFS
placement:
  hosts:
  - ses-min1
  - ses-min2
spec:
  pool: EXAMPLE_POOL
  namespace: EXAMPLE_NAMESPACE
---
service_type: rgw
service_id: REALM_NAME.ZONE_NAME
placement:
  hosts:
  - ses-min1
  - ses-min2
  - ses-min3
---
[...]
The aforementioned properties have the following meaning:
service_type
The type of the service. It can be either a Ceph service (mon, mgr, mds, crash, osd, or rbd-mirror), a gateway (nfs or rgw), or part of the monitoring stack (alertmanager, grafana, node-exporter, or prometheus).
service_id
The name of the service. Specifications of type mon, mgr, alertmanager, grafana, node-exporter, and prometheus do not require the service_id property.
placement
Specifies which nodes will be running the service. Refer to Section 8.2.2, “Creating placement specification” for more details.
spec
Additional specification relevant for the service type.
Ceph cluster services usually have a number of properties specific to them. For examples and details of individual services' specifications, refer to Section 8.3, “Deploy Ceph services”.
8.2.2 Creating placement specification
To deploy Ceph services, cephadm needs to know on which nodes to deploy them. Use the placement property and list the short host names of the nodes that the service applies to:
cephuser@adm > cat cluster.yml
[...]
placement:
  hosts:
  - host1
  - host2
  - host3
[...]
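Besides an explicit host list, the placement property can also specify how many daemons to run and let cephadm choose suitable nodes; the exported specification in Section 8.2.4, “Exporting the specification of a running cluster” uses this count form for the MGR service. A minimal sketch:
placement:
  count: 3  # cephadm picks three suitable nodes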
8.2.3 Applying cluster specification
After you have created a full cluster.yml file with specifications of all services and their placement, you can apply the cluster by running the following command:
cephuser@adm > ceph orch apply -i cluster.yml
To view the status of the cluster, run the ceph orch status command. For more details, see Section 8.1.1, “Displaying the orchestrator status”.
8.2.4 Exporting the specification of a running cluster
Although you deployed services to the Ceph cluster by using the specification files as described in Section 8.2, “Service and placement specification”, the configuration of the cluster may diverge from the original specification during its operation. Also, you may have removed the specification files accidentally.
To retrieve a complete specification of a running cluster, run:
cephuser@adm > ceph orch ls --export
placement:
  hosts:
  - hostname: ses-min1
    name: ''
    network: ''
service_id: my_cephfs
service_name: mds.my_cephfs
service_type: mds
---
placement:
  count: 2
service_name: mgr
service_type: mgr
---
[...]
You can append the --format option to change the default yaml output format. You can select from json, json-pretty, or yaml. For example:
cephuser@adm > ceph orch ls --export --format json
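Assuming you want to keep the exported specification for later re-use, you can redirect the output to a file and apply it again after editing (a sketch; the file name is arbitrary):
cephuser@adm > ceph orch ls --export --format yaml > cluster.yml
cephuser@adm > ceph orch apply -i cluster.yml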
8.3 Deploy Ceph services
After the basic cluster is running, you can deploy Ceph services to additional nodes.
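Before targeting additional nodes, it can help to verify which hosts the orchestrator already knows about; a sketch using the standard host listing command:
cephuser@adm > ceph orch host ls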
8.3.1 Deploying Ceph Monitors and Ceph Managers
A typical Ceph cluster has three or five MONs deployed across different nodes. If there are five or more nodes in the cluster, we recommend deploying five MONs. A good practice is to have MGRs deployed on the same nodes as MONs.
When deploying MONs and MGRs, remember to include the first MON that you added when configuring the basic cluster in Section 7.2.5, “Specifying first MON/MGR node”.
To deploy MONs, apply the following specification:
service_type: mon
placement:
  hosts:
  - ses-min1
  - ses-min2
  - ses-min3
If you need to add another node, append the host name to the same YAML list. For example:
service_type: mon
placement:
  hosts:
  - ses-min1
  - ses-min2
  - ses-min3
  - ses-min4
Similarly, to deploy MGRs, apply the following specification:
Ensure your deployment has at least three Ceph Managers.
service_type: mgr
placement:
  hosts:
  - ses-min1
  - ses-min2
  - ses-min3
If MONs or MGRs are not on the same subnet, you need to append the subnet addresses. For example:
service_type: mon
placement:
  hosts:
  - ses-min1:10.1.2.0/24
  - ses-min2:10.1.5.0/24
  - ses-min3:10.1.10.0/24
8.3.2 Deploying Ceph OSDs
A storage device is considered available if all of the following conditions are met:
The device has no partitions.
The device does not have any LVM state.
The device is not mounted.
The device does not contain a file system.
The device does not contain a BlueStore OSD.
The device is larger than 5 GB.
If the above conditions are not met, Ceph refuses to provision such OSDs.
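If a device fails these checks only because of leftover data (for example an old partition table or LVM metadata), you can wipe it with the orchestrator's zap command. Note that this destroys all data on the device; the host name and device path below are illustrative:
cephuser@adm > ceph orch device zap ses-min1 /dev/vdb --force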
There are two ways you can deploy OSDs:
Tell Ceph to consume all available and unused storage devices:
cephuser@adm > ceph orch apply osd --all-available-devices
Use DriveGroups (see Section 13.4.3, “Adding OSDs using DriveGroups specification”) to create an OSD specification describing devices that will be deployed based on their properties, such as device type (SSD or HDD), device model names, size, or the nodes on which the devices exist. Then apply the specification by running the following command (a minimal sketch of such a specification follows below):
cephuser@adm > ceph orch apply osd -i drive_groups.yml
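A minimal sketch of such a drive_groups.yml, assuming you simply want every eligible device on every node to become an OSD (the service_id is illustrative; see Section 13.4.3 for the full DriveGroups syntax):
service_type: osd
service_id: EXAMPLE_DRIVE_GROUP  # illustrative name
placement:
  host_pattern: '*'
data_devices:
  all: true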
8.3.3 Deploying Metadata Servers
CephFS requires one or more Metadata Server (MDS) services. To create a CephFS, first create MDS servers by applying the following specification:
Ensure you have at least two pools, one for CephFS data and one for CephFS metadata, created before applying the following specification.
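If these pools do not exist yet, you can create them first; a minimal sketch with illustrative pool names (see Section 18.1, “Creating a pool” for details and pool options):
cephuser@adm > ceph osd pool create cephfs_metadata
cephuser@adm > ceph osd pool create cephfs_data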
service_type: mds
service_id: CEPHFS_NAME
placement:
  hosts:
  - ses-min1
  - ses-min2
  - ses-min3
After MDSs are functional, create the CephFS:
cephuser@adm > ceph fs new CEPHFS_NAME metadata_pool data_pool
8.3.4 Deploying Object Gateways
cephadm deploys an Object Gateway as a collection of daemons that manage a particular realm and zone.
You can either relate an Object Gateway service to an already existing realm and zone (refer to Section 21.13, “Multisite Object Gateways” for more details), or you can specify a non-existing REALM_NAME and ZONE_NAME and they will be created automatically after you apply the following configuration:
service_type: rgw
service_id: REALM_NAME.ZONE_NAME
placement:
  hosts:
  - ses-min1
  - ses-min2
  - ses-min3
spec:
  rgw_realm: RGW_REALM
  rgw_zone: RGW_ZONE
8.3.4.1 Using secure SSL access
To use a secure SSL connection to the Object Gateway, you need a pair of valid SSL certificate and key files (see Section 21.7, “Enable HTTPS/SSL for Object Gateways” for more details). You need to enable SSL, specify a port number for SSL connections, and the SSL certificate and key files.
To enable SSL and specify the port number, include the following in your specification:
spec:
  ssl: true
  rgw_frontend_port: 443
To specify the SSL certificate and key, you can paste their contents directly into the YAML specification file. The pipe sign (|) at the end of the line tells the parser to expect a multi-line string as a value. For example:
spec:
  ssl: true
  rgw_frontend_port: 443
  rgw_frontend_ssl_certificate: |
    -----BEGIN CERTIFICATE-----
    MIIFmjCCA4KgAwIBAgIJAIZ2n35bmwXTMA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV
    BAYTAkFVMQwwCgYDVQQIDANOU1cxHTAbBgNVBAoMFEV4YW1wbGUgUkdXIFNTTCBp
    [...]
    -----END CERTIFICATE-----
  rgw_frontend_ssl_key: |
    -----BEGIN PRIVATE KEY-----
    MIIJRAIBADANBgkqhkiG9w0BAQEFAASCCS4wggkqAgEAAoICAQDLtFwg6LLl2j4Z
    BDV+iL4AO7VZ9KbmWIt37Ml2W6y2YeKX3Qwf+3eBz7TVHR1dm6iPpCpqpQjXUsT9
    [...]
    -----END PRIVATE KEY-----
Instead of pasting the content of the SSL certificate and key files, you can omit the rgw_frontend_ssl_certificate: and rgw_frontend_ssl_key: keywords and upload them to the configuration database:
cephuser@adm > ceph config-key set rgw/cert/REALM_NAME/ZONE_NAME.crt \
 -i SSL_CERT_FILE
cephuser@adm > ceph config-key set rgw/cert/REALM_NAME/ZONE_NAME.key \
 -i SSL_KEY_FILE
8.3.4.1.1 Configure the Object Gateway to listen on both ports 443 and 80
To configure the Object Gateway to listen on both ports 443 (HTTPS) and 80 (HTTP), follow these steps:
The commands in this procedure use the realm and zone named default.
Deploy the Object Gateway by supplying a specification file. Refer to Section 8.3.4, “Deploying Object Gateways” for more details on the Object Gateway specification. Use the following command:
cephuser@adm > ceph orch apply -i SPEC_FILE
If SSL certificates are not supplied in the specification file, add them by using the following commands:
cephuser@adm > ceph config-key set rgw/cert/default/default.crt -i certificate.pem
cephuser@adm > ceph config-key set rgw/cert/default/default.key -i key.pem
Change the default value of the rgw_frontends option:
cephuser@adm > ceph config set client.rgw.default.default rgw_frontends \
 "beast port=80 ssl_port=443"
Restart Object Gateways:
cephuser@adm > ceph orch restart rgw.default.default
8.3.4.2 Deploying with a subcluster
Subclusters help you organize the nodes in your clusters to isolate workloads and make elastic scaling easier. If you are deploying with a subcluster, apply the following configuration:
service_type: rgw
service_id: REALM_NAME.ZONE_NAME.SUBCLUSTER
placement:
  hosts:
  - ses-min1
  - ses-min2
  - ses-min3
spec:
  rgw_realm: RGW_REALM
  rgw_zone: RGW_ZONE
  subcluster: SUBCLUSTER
8.3.5 Deploying iSCSI Gateways
cephadm deploys an iSCSI Gateway. iSCSI is a storage area network (SAN) protocol that allows clients (called initiators) to send SCSI commands to SCSI storage devices (targets) on remote servers.
Apply the following configuration to deploy. Ensure trusted_ip_list contains the IP addresses of all iSCSI Gateway and Ceph Manager nodes (see the example output below).
Ensure the pool is created before applying the following specification.
service_type: iscsi
service_id: EXAMPLE_ISCSI
placement:
  hosts:
  - ses-min1
  - ses-min2
  - ses-min3
spec:
  pool: EXAMPLE_POOL
  api_user: EXAMPLE_USER
  api_password: EXAMPLE_PASSWORD
  trusted_ip_list: "IP_ADDRESS_1,IP_ADDRESS_2"
Ensure the IPs listed for trusted_ip_list do not have a space after the comma separation.
8.3.5.1 Secure SSL configuration
To use a secure SSL connection between the Ceph Dashboard and the iSCSI target API, you need a pair of valid SSL certificate and key files. These can be either CA-issued or self-signed (see Section 10.1.1, “Creating self-signed certificates”). To enable SSL, include the api_secure: true setting in your specification file:
spec:
  api_secure: true
To specify the SSL certificate and key, you can paste the content directly into the YAML specification file. The pipe sign (|) at the end of the line tells the parser to expect a multi-line string as a value. For example:
spec:
  pool: EXAMPLE_POOL
  api_user: EXAMPLE_USER
  api_password: EXAMPLE_PASSWORD
  trusted_ip_list: "IP_ADDRESS_1,IP_ADDRESS_2"
  api_secure: true
  ssl_cert: |
    -----BEGIN CERTIFICATE-----
    MIIDtTCCAp2gAwIBAgIYMC4xNzc1NDQxNjEzMzc2MjMyXzxvQ7EcMA0GCSqGSIb3
    DQEBCwUAMG0xCzAJBgNVBAYTAlVTMQ0wCwYDVQQIDARVdGFoMRcwFQYDVQQHDA5T
    [...]
    -----END CERTIFICATE-----
  ssl_key: |
    -----BEGIN PRIVATE KEY-----
    MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC5jdYbjtNTAKW4
    /CwQr/7wOiLGzVxChn3mmCIF3DwbL/qvTFTX2d8bDf6LjGwLYloXHscRfxszX/4h
    [...]
    -----END PRIVATE KEY-----
8.3.6 Deploying NFS Ganesha
NFS Ganesha supports NFS version 4.1 and newer. It does not support NFS version 3.
cephadm deploys NFS Ganesha using a pre-defined RADOS pool and an optional namespace. To deploy NFS Ganesha, apply the following specification:
You need to have a pre-defined RADOS pool, otherwise the ceph orch apply operation will fail. For more information on creating a pool, see Section 18.1, “Creating a pool”.
service_type: nfs
service_id: EXAMPLE_NFS
placement:
  hosts:
  - ses-min1
  - ses-min2
spec:
  pool: EXAMPLE_POOL
  namespace: EXAMPLE_NAMESPACE
Replace:
EXAMPLE_NFS with an arbitrary string that identifies the NFS export.
EXAMPLE_POOL with the name of the pool where the NFS Ganesha RADOS configuration object will be stored.
EXAMPLE_NAMESPACE (optional) with the desired Object Gateway NFS namespace (for example, ganesha).
8.3.7 Deploying rbd-mirror
The rbd-mirror service takes care of synchronizing RADOS Block Device images between two Ceph clusters (for more details, see Section 20.4, “RBD image mirrors”). To deploy rbd-mirror, use the following specification:
service_type: rbd-mirror
service_id: EXAMPLE_RBD_MIRROR
placement:
  hosts:
  - ses-min3
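After applying the specification, you can confirm that the daemon started by listing daemons of this type, as described in Section 8.1.2 (a sketch):
cephuser@adm > ceph orch ps --daemon_type rbd-mirror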
8.3.8 Deploying the monitoring stack
The monitoring stack consists of Prometheus, Prometheus exporters, Prometheus Alertmanager, and Grafana. Ceph Dashboard makes use of these components to store and visualize detailed metrics on cluster usage and performance.
If your deployment requires custom or locally served container images of the monitoring stack services, refer to Section 16.1, “Configuring custom or local images”.
To deploy the monitoring stack, follow these steps:
Enable the prometheus module in the Ceph Manager daemon. This exposes the internal Ceph metrics so that Prometheus can read them:
cephuser@adm > ceph mgr module enable prometheus
Note: Ensure this command is run before Prometheus is deployed. If the command was not run before the deployment, you must redeploy Prometheus to update its configuration:
cephuser@adm > ceph orch redeploy prometheus
Create a specification file (for example monitoring.yaml) with content similar to the following:
service_type: prometheus
placement:
  hosts:
  - ses-min2
---
service_type: node-exporter
---
service_type: alertmanager
placement:
  hosts:
  - ses-min4
---
service_type: grafana
placement:
  hosts:
  - ses-min3
Apply monitoring services by running:
cephuser@adm > ceph orch apply -i monitoring.yaml
It may take a minute or two for the monitoring services to be deployed.
Prometheus, Grafana, and the Ceph Dashboard are all automatically configured to talk to each other, resulting in a fully functional Grafana integration in the Ceph Dashboard when deployed as described above.
The only exception to this rule is monitoring with RBD images. See Section 16.5.4, “Enabling RBD-image monitoring” for more information.