7 Deploying the bootstrap cluster using ceph-salt #
This section guides you through the process of deploying a basic Ceph cluster. Read the following subsections carefully and execute the included commands in the given order.
7.1 Installing ceph-salt #
ceph-salt provides tools for deploying Ceph clusters managed by cephadm. ceph-salt uses the Salt infrastructure to perform OS management (for example, software updates or time synchronization) and to define roles for Salt Minions.
On the Salt Master, install the ceph-salt package:
root@master #
zypper install ceph-salt
The above command installed ceph-salt-formula as a dependency, which modified the Salt Master configuration by inserting additional files into the /etc/salt/master.d directory.
To apply the changes, restart
salt-master.service
and synchronize
Salt modules:
root@master #
systemctl restart salt-master.service
root@master #
salt \* saltutil.sync_all
7.2 Configuring cluster properties #
Use the ceph-salt config
command to configure the basic
properties of the cluster.
The /etc/ceph/ceph.conf
file is managed by cephadm
and users should not edit it. Ceph configuration
parameters should be set using the new ceph config
command. See Section 28.2, “Configuration database” for more
information.
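For example, the ceph config command is used later in Section 7.2.11 to adjust the Messenger v2 connection modes on the running cluster:
root@master #
ceph config set global ms_cluster_mode "secure crc"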
7.2.1 Using the ceph-salt shell #
If you run ceph-salt config without any path or subcommand, you will enter an interactive ceph-salt shell. The shell is convenient if you need to configure multiple properties in one batch and do not want to type the full command syntax.
root@master #
ceph-salt config
/>
ls
o- / ............................................................... [...]
  o- ceph_cluster .................................................. [...]
  | o- minions .............................................. [no minions]
  | o- roles ....................................................... [...]
  |   o- admin .............................................. [no minions]
  |   o- bootstrap ........................................... [no minion]
  |   o- cephadm ............................................ [no minions]
  |   o- tuned ..................................................... [...]
  |     o- latency .......................................... [no minions]
  |     o- throughput ....................................... [no minions]
  o- cephadm_bootstrap ............................................. [...]
  | o- advanced .................................................... [...]
  | o- ceph_conf ................................................... [...]
  | o- ceph_image_path .................................. [ no image path]
  | o- dashboard ................................................... [...]
  | | o- force_password_update ................................. [enabled]
  | | o- password ................................................ [admin]
  | | o- ssl_certificate ....................................... [not set]
  | | o- ssl_certificate_key ................................... [not set]
  | | o- username ................................................ [admin]
  | o- mon_ip .................................................. [not set]
  o- containers .................................................... [...]
  | o- registries_conf ......................................... [enabled]
  | | o- registries .............................................. [empty]
  | o- registry_auth ............................................... [...]
  |   o- password .............................................. [not set]
  |   o- registry .............................................. [not set]
  |   o- username .............................................. [not set]
  o- ssh ............................................... [no key pair set]
  | o- private_key .................................. [no private key set]
  | o- public_key .................................... [no public key set]
  o- time_server ........................... [enabled, no server host set]
    o- external_servers .......................................... [empty]
    o- servers ................................................... [empty]
    o- subnet .................................................. [not set]
As you can see from the output of ceph-salt's ls command, the cluster configuration is organized in a tree structure. To configure a specific property of the cluster in the ceph-salt shell, you have two options:
Run the command from the current position and enter the absolute path to the property as the first argument:
/>
/cephadm_bootstrap/dashboard ls
o- dashboard ....................................................... [...]
  o- force_password_update ..................................... [enabled]
  o- password .................................................... [admin]
  o- ssl_certificate ........................................... [not set]
  o- ssl_certificate_key ....................................... [not set]
  o- username .................................................... [admin]
/>
/cephadm_bootstrap/dashboard/username set ceph-admin
Value set.
Change to the path whose property you need to configure and run the command:
/>
cd /cephadm_bootstrap/dashboard/
/cephadm_bootstrap/dashboard>
ls
o- dashboard ....................................................... [...]
  o- force_password_update ..................................... [enabled]
  o- password .................................................... [admin]
  o- ssl_certificate ........................................... [not set]
  o- ssl_certificate_key ....................................... [not set]
  o- username ................................................ [ceph-admin]
While in a ceph-salt shell, you can use the autocompletion feature, similar to normal Linux shell (Bash) autocompletion. It completes configuration paths, subcommands, or Salt Minion names. When autocompleting a configuration path, you have two options:
To let the shell finish a path relative to your current position, press the Tab key twice.
To let the shell finish an absolute path, enter / and press the Tab key twice.
If you enter cd
from the ceph-salt
shell without any
path, the command will print a tree structure of the cluster configuration
with the line of the current path active. You can use the up and down
cursor keys to navigate through individual lines. After you confirm with
Enter, the configuration path will change to
the last active one.
To keep the documentation consistent, we will use a single command syntax
without entering the ceph-salt
shell. For example, you can list the
cluster configuration tree by using the following command:
root@master #
ceph-salt config ls
7.2.2 Adding Salt Minions #
Include all or a subset of the Salt Minions that we deployed and accepted in Chapter 6, Deploying Salt, in the Ceph cluster configuration. You can either specify the Salt Minions by their full names, or use the glob expressions '*' and '?' to include multiple Salt Minions at once. Use the add subcommand under the /ceph_cluster/minions path. The following command includes all accepted Salt Minions:
root@master #
ceph-salt config /ceph_cluster/minions add '*'
Verify that the specified Salt Minions were added:
root@master #
ceph-salt config /ceph_cluster/minions ls
o- minions ................................................. [Minions: 5]
o- ses-master.example.com .................................. [no roles]
o- ses-min1.example.com .................................... [no roles]
o- ses-min2.example.com .................................... [no roles]
o- ses-min3.example.com .................................... [no roles]
o- ses-min4.example.com .................................... [no roles]
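If you prefer to add only a subset of minions, you can use a glob expression instead. For example, with the host names above, the following command (shown for illustration) would add only the ses-min* minions and omit the Salt Master:
root@master #
ceph-salt config /ceph_cluster/minions add 'ses-min*'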
7.2.3 Specifying Salt Minions managed by cephadm #
Specify which nodes will belong to the Ceph cluster and will be managed by cephadm. Include all nodes that will run Ceph services as well as the Admin Node:
root@master #
ceph-salt config /ceph_cluster/roles/cephadm add '*'
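As with the minions list, you can verify the role assignment with the ls subcommand; in the example above, the cephadm role should list all five minions:
root@master #
ceph-salt config /ceph_cluster/roles/cephadm ls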
7.2.4 Specifying Admin Node #
The Admin Node is the node where the ceph.conf configuration file and the Ceph admin keyring are installed. You usually run Ceph-related commands on the Admin Node.
In a homogeneous environment where all or most hosts belong to SUSE Enterprise Storage, we recommend having the Admin Node on the same host as the Salt Master.
In a heterogeneous environment where one Salt infrastructure hosts more than one cluster, for example, SUSE Enterprise Storage together with SUSE Manager, do not place the Admin Node on the same host as the Salt Master.
To specify the Admin Node, run the following command:
root@master #
ceph-salt config /ceph_cluster/roles/admin add ses-master.example.com
1 minion added.
root@master #
ceph-salt config /ceph_cluster/roles/admin ls
o- admin ................................................... [Minions: 1]
  o- ses-master.example.com ...................... [Other roles: cephadm]
ceph.conf and the admin keyring on multiple nodes
You can install the Ceph configuration file and admin keyring on multiple nodes if your deployment requires it. For security reasons, avoid installing them on all the cluster's nodes.
7.2.5 Specifying first MON/MGR node #
You need to specify which of the cluster's Salt Minions will bootstrap the cluster. This minion will become the first one running Ceph Monitor and Ceph Manager services.
root@master #
ceph-salt config /ceph_cluster/roles/bootstrap set ses-min1.example.com
Value set.
root@master #
ceph-salt config /ceph_cluster/roles/bootstrap ls
o- bootstrap ..................................... [ses-min1.example.com]
Additionally, you need to specify the bootstrap MON's IP address on the
public network to ensure that the public_network
parameter
is set correctly, for example:
root@master #
ceph-salt config /cephadm_bootstrap/mon_ip set 192.168.10.20
7.2.6 Specifying tuned profiles #
You need to specify which of the cluster's minions have actively tuned profiles. To do so, add these roles explicitly with the following commands:
One minion cannot have both the latency and throughput roles.
root@master #
ceph-salt config /ceph_cluster/roles/tuned/latency add ses-min1.example.com
Adding ses-min1.example.com...
1 minion added.
root@master #
ceph-salt config /ceph_cluster/roles/tuned/throughput add ses-min2.example.com
Adding ses-min2.example.com...
1 minion added.
7.2.7 Generating an SSH key pair #
cephadm uses the SSH protocol to communicate with cluster nodes. A user
account named cephadm
is automatically created and used
for SSH communication.
You need to generate the private and public part of the SSH key pair:
root@master #
ceph-salt config /ssh generate
Key pair generated.
root@master #
ceph-salt config /ssh ls
o- ssh .................................................. [Key Pair set]
  o- private_key ..... [53:b1:eb:65:d2:3a:ff:51:6c:e2:1b:ca:84:8e:0e:83]
  o- public_key ...... [53:b1:eb:65:d2:3a:ff:51:6c:e2:1b:ca:84:8e:0e:83]
7.2.8 Configuring the time server #
All cluster nodes need to have their time synchronized with a reliable time source. There are several scenarios to approach time synchronization:
If all cluster nodes are already configured to synchronize their time using an NTP service of choice, disable time server handling completely:
root@master #
ceph-salt config /time_server disable
If your site already has a single source of time, specify the host name of the time source:
root@master #
ceph-salt config /time_server/servers add time-server.example.com
Alternatively, ceph-salt can configure one of the Salt Minions to serve as the time server for the rest of the cluster. This is sometimes referred to as an "internal time server". In this scenario, ceph-salt configures the internal time server (which should be one of the Salt Minions) to synchronize its time with an external time server, such as pool.ntp.org, and configures all the other minions to get their time from the internal time server. This can be achieved as follows:
root@master #
ceph-salt config /time_server/servers add ses-master.example.com
root@master #
ceph-salt config /time_server/external_servers add pool.ntp.org
The /time_server/subnet option specifies the subnet from which NTP clients are allowed to access the NTP server. It is automatically set when you specify /time_server/servers. If you need to change it or specify it manually, run:
root@master #
ceph-salt config /time_server/subnet set 10.20.6.0/24
Check the time server settings:
root@master #
ceph-salt config /time_server ls
o- time_server ................................................ [enabled]
o- external_servers ............................................... [1]
| o- pool.ntp.org ............................................... [...]
o- servers ........................................................ [1]
| o- ses-master.example.com ..................................... [...]
o- subnet .............................................. [10.20.6.0/24]
Find more information on setting up time synchronization in https://documentation.suse.com/sles/15-SP2/html/SLES-all/cha-ntp.html#sec-ntp-yast.
7.2.9 Configuring the Ceph Dashboard login credentials #
Ceph Dashboard will be available after the basic cluster is deployed. To access it, you need to set a valid user name and password, for example:
root@master #
ceph-salt config /cephadm_bootstrap/dashboard/username set admin
root@master #
ceph-salt config /cephadm_bootstrap/dashboard/password set PWD
By default, the first dashboard user will be forced to change their password on first login to the dashboard. To disable this feature, run the following command:
root@master #
ceph-salt config /cephadm_bootstrap/dashboard/force_password_update disable
7.2.10 Using the container registry #
The Ceph cluster needs to have access to a container registry so that it can download and deploy containerized Ceph services. There are two ways to access the registry:
If your cluster can access the default registry at registry.suse.com (directly or via proxy), you can point ceph-salt directly to this URL without creating a local registry. Continue by following the steps in Section 7.2.10.2, “Configuring the path to container images”.
If your cluster cannot access the default registry, for example, for an air-gapped deployment, you need to configure a local container registry. After the local registry is created and configured, you need to point ceph-salt to it.
7.2.10.1 Creating and configuring the local registry (optional) #
There are numerous methods of creating a local registry. The instructions in this section are examples of creating secure and insecure registries. For general information on running a container image registry, refer to https://documentation.suse.com/sles/15-SP2/html/SLES-all/cha-registry-installation.html#sec-docker-registry-installation.
Deploy the registry on a machine accessible by all nodes in the cluster. We recommend the Admin Node. By default, the registry listens on port 5000.
On the registry node, use the following command to ensure that the port is free:
ss -tulpn | grep :5000
If other processes (such as iscsi-tcmu
) are already
listening on port 5000, determine another free port which can be used to
map to port 5000 in the registry container.
Verify that the Containers Module extension is enabled:
>
SUSEConnect --list-extensions | grep -A2 "Containers Module"
Containers Module 15 SP2 x86_64 (Activated)
Verify that the following packages are installed: apache2-utils (if enabling a secure registry), cni, cni-plugins, podman, podman-cni-config, and skopeo.
Gather the following information:
Fully qualified domain name of the registry host (REG_HOST_FQDN).
An available port number used to map to the registry container port of 5000 (REG_HOST_PORT).
Whether the registry will be secure or insecure (insecure=[true|false]).
To start an insecure registry (without SSL encryption), follow these steps:
Configure ceph-salt for the insecure registry:
cephuser@adm >
ceph-salt config containers/registries_conf enable
cephuser@adm >
ceph-salt config containers/registries_conf/registries \
  add prefix=REG_HOST_FQDN insecure=true \
  location=REG_HOST_PORT:5000
Start the insecure registry by creating the necessary directory (for example, /var/lib/registry) and starting the registry with the podman command:
#
mkdir -p /var/lib/registry
#
podman run --privileged -d --name registry \
  -p REG_HOST_PORT:5000 -v /var/lib/registry:/var/lib/registry \
  --restart=always registry:2
To have the registry start after a reboot, create a systemd unit file for it and enable it:
>
sudo podman generate systemd --files --name registry
>
sudo mv container-registry.service /etc/systemd/system/
>
sudo systemctl enable container-registry.service
To start a secure registry, follow these steps:
Create the necessary directories:
#
mkdir -p /var/lib/registry/{auth,certs}
Generate an SSL certificate:
#
openssl req -newkey rsa:4096 -nodes -sha256 \
  -keyout /var/lib/registry/certs/domain.key -x509 -days 365 \
  -out /var/lib/registry/certs/domain.crt
Note: Set the CN=[value] value to the fully qualified domain name of the host ([REG_HOST_FQDN]).
Copy the certificate to all cluster nodes and refresh the certificate cache:
#
salt-cp '*' /var/lib/registry/certs/domain.crt \
  /etc/pki/trust/anchors/
#
salt '*' cmd.shell "update-ca-certificates"
Generate a username and password combination for authentication to the registry:
#
htpasswd2 -bBc /var/lib/registry/auth/htpasswd \
  REG_USERNAME REG_PASSWORD
Start the secure registry. Use the REGISTRY_STORAGE_DELETE_ENABLED=true flag so that you can delete images afterwards with the skopeo delete command.
podman run --name myregistry -p REG_HOST_PORT:5000 \
  -v /var/lib/registry:/var/lib/registry \
  -v /var/lib/registry/auth:/auth:z \
  -e "REGISTRY_AUTH=htpasswd" \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -v /var/lib/registry/certs:/certs:z \
  -e "REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt" \
  -e "REGISTRY_HTTP_TLS_KEY=/certs/domain.key" \
  -e REGISTRY_STORAGE_DELETE_ENABLED=true \
  -e REGISTRY_COMPATIBILITY_SCHEMA1_ENABLED=true -d registry:2
Test secure access to the registry:
>
curl https://REG_HOST_FQDN:REG_HOST_PORT/v2/_catalog \
  -u REG_USERNAME:REG_PASSWORD
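Optionally, you can also verify the generated credentials with podman itself, assuming the same placeholder host, port, user name, and password as above:
>
podman login REG_HOST_FQDN:REG_HOST_PORT -u REG_USERNAME -p REG_PASSWORD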
When the local registry is created, you need to synchronize container images from the official SUSE registry at registry.suse.com to the local one. You can use the skopeo sync command found in the skopeo package for that purpose. For more details, refer to the manual page (man 1 skopeo-sync). Consider the following examples:
Example 7.1: Viewing manifest files #
skopeo inspect docker://registry.suse.com/ses/7/ceph/ceph | jq .RepoTags
skopeo inspect docker://registry.suse.com/ses/7/ceph/grafana | jq .RepoTags
skopeo inspect docker://registry.suse.com/ses/7/ceph/prometheus-server:2.27.1 | jq .RepoTags
skopeo inspect docker://registry.suse.com/ses/7/ceph/prometheus-node-exporter:1.1.2 | jq .RepoTags
skopeo inspect docker://registry.suse.com/ses/7/ceph/prometheus-alertmanager:0.21.0 | jq .RepoTags
Example 7.2: Synchronize to a directory #
Synchronize all Ceph images:
skopeo sync --src docker --dest dir registry.suse.com/ses/7/ceph/ceph /root/images/
Synchronize just the latest images:
skopeo sync --src docker --dest dir registry.suse.com/ses/7/ceph/ceph:latest /root/images/
Example 7.3: Synchronize Grafana images #
skopeo sync --src docker --dest dir registry.suse.com/ses/7/ceph/grafana /root/images/
Synchronize the latest Grafana images only:
skopeo sync --src docker --dest dir registry.suse.com/ses/7/ceph/grafana:latest /root/images/
Example 7.4: Synchronize latest Prometheus images #
skopeo sync --src docker --dest dir registry.suse.com/ses/7/ceph/prometheus-server:2.27.1 /root/images/
skopeo sync --src docker --dest dir registry.suse.com/ses/7/ceph/prometheus-node-exporter:1.1.2 /root/images/
skopeo sync --src docker --dest dir registry.suse.com/ses/7/ceph/prometheus-alertmanager:0.21.0 /root/images/
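After synchronizing to a directory, the images still need to be copied into the local registry. One possible approach, sketched here with the placeholder registry host, port, and credentials from above and an assumed repository prefix, is to run skopeo sync in the opposite direction (see man 1 skopeo-sync for the exact directory layout it expects):
skopeo sync --src dir --dest docker \
  --dest-creds REG_USERNAME:REG_PASSWORD \
  /root/images/ REG_HOST_FQDN:REG_HOST_PORT/ses/7/ceph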
Configure the URL of the local registry:
cephuser@adm >
ceph-salt config /containers/registry_auth/registry set REG_HOST_URL
Configure the user name and password to access the local registry:
cephuser@adm >
ceph-salt config /containers/registry_auth/username set REG_USERNAME
cephuser@adm >
ceph-salt config /containers/registry_auth/password set REG_PASSWORD
To avoid re-syncing the local registry when new updated containers appear, you can configure a registry cache.
7.2.10.2 Configuring the path to container images #
This section helps you configure the path to container images of the bootstrap cluster (deployment of the first Ceph Monitor and Ceph Manager pair). The path does not apply to container images of additional services, for example the monitoring stack.
If you need to use a proxy to communicate with the container registry server, perform the following configuration steps on all cluster nodes:
Copy the configuration file for containers:
>
sudo cp /usr/share/containers/containers.conf /etc/containers/containers.conf
Edit the newly copied file and add the http_proxy setting to its [engine] section, for example:
>
cat /etc/containers/containers.conf
[engine]
http_proxy=proxy.example.com
[...]
cephadm needs to know a valid URI path to container images. Verify the default setting by executing:
root@master #
ceph-salt config /cephadm_bootstrap/ceph_image_path ls
If you do not need an alternative or local registry, specify the default SUSE container registry:
root@master #
ceph-salt config /cephadm_bootstrap/ceph_image_path set registry.suse.com/ses/7/ceph/ceph
If your deployment requires a specific path, for example, a path to a local registry, configure it as follows:
root@master #
ceph-salt config /cephadm_bootstrap/ceph_image_path set LOCAL_REGISTRY_PATH
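For example, if you mirrored the Ceph image into the local registry from Section 7.2.10.1, the path could look like the following (the repository prefix is only an illustration and depends on how you synchronized the images):
root@master #
ceph-salt config /cephadm_bootstrap/ceph_image_path set REG_HOST_FQDN:REG_HOST_PORT/ses/7/ceph/ceph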
7.2.11 Enabling data in-flight encryption (msgr2) #
The Messenger v2 protocol (MSGR2) is Ceph's on-wire protocol. It provides a security mode that encrypts all data passing over the network, encapsulates authentication payloads, and enables the future integration of new authentication modes (such as Kerberos).
msgr2 is not currently supported by Linux kernel Ceph clients, such as CephFS and RADOS Block Device.
Ceph daemons can bind to multiple ports, allowing both legacy Ceph clients and new v2-capable clients to connect to the same cluster. By default, MONs now bind to the new IANA-assigned port 3300 (CE4h or 0xCE4) for the new v2 protocol, while also binding to the old default port 6789 for the legacy v1 protocol.
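Once the cluster is bootstrapped, you can see both address families in the monitor map. The output below is abridged and only illustrates the format, using the example MON IP from Section 7.2.5:
root@master #
ceph mon dump
[...]
0: [v2:192.168.10.20:3300/0,v1:192.168.10.20:6789/0] mon.ses-min1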
The v2 protocol (MSGR2) supports two connection modes:
- crc mode
A strong initial authentication when the connection is established and a CRC32C integrity check.
- secure mode
A strong initial authentication when the connection is established and full encryption of all post-authentication traffic, including a cryptographic integrity check.
For most connections, there are options that control which modes are used:
- ms_cluster_mode
The connection mode (or permitted modes) used for intra-cluster communication between Ceph daemons. If multiple modes are listed, the modes listed first are preferred.
- ms_service_mode
A list of permitted modes for clients to use when connecting to the cluster.
- ms_client_mode
A list of connection modes, in order of preference, for clients to use (or allow) when talking to a Ceph cluster.
There are a parallel set of options that apply specifically to monitors, allowing administrators to set different (usually more secure) requirements on communication with the monitors.
- ms_mon_cluster_mode
The connection mode (or permitted modes) to use between monitors.
- ms_mon_service_mode
A list of permitted modes for clients or other Ceph daemons to use when connecting to monitors.
- ms_mon_client_mode
A list of connection modes, in order of preference, for clients or non-monitor daemons to use when connecting to monitors.
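These options use ordinary ceph.conf syntax. As a sketch, the secure-preferred configuration applied by the ceph-salt commands below corresponds to the following global section (do not edit /etc/ceph/ceph.conf by hand; it is managed by cephadm, as noted in Section 7.2):
[global]
ms_cluster_mode = secure crc
ms_service_mode = secure crc
ms_client_mode = secure crc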
In order to enable MSGR2 encryption mode during the deployment, you need to
add some configuration options to the ceph-salt
configuration before
running ceph-salt apply
.
To use secure
mode, run the following commands.
Add the global section to ceph_conf
in the ceph-salt
configuration tool:
root@master #
ceph-salt config /cephadm_bootstrap/ceph_conf add global
Set the following options:
root@master #
ceph-salt config /cephadm_bootstrap/ceph_conf/global set ms_cluster_mode "secure crc"root@master #
ceph-salt config /cephadm_bootstrap/ceph_conf/global set ms_service_mode "secure crc"root@master #
ceph-salt config /cephadm_bootstrap/ceph_conf/global set ms_client_mode "secure crc"
Ensure secure
precedes crc
.
To force secure
mode, run the
following commands:
root@master #
ceph-salt config /cephadm_bootstrap/ceph_conf/global set ms_cluster_mode secure
root@master #
ceph-salt config /cephadm_bootstrap/ceph_conf/global set ms_service_mode secure
root@master #
ceph-salt config /cephadm_bootstrap/ceph_conf/global set ms_client_mode secure
If you want to change any of the above settings, apply the configuration changes in the monitor configuration store. This is achieved using the ceph config set command.
root@master #
ceph config set global CONNECTION_OPTION CONNECTION_MODE [--force]
For example:
root@master #
ceph config set global ms_cluster_mode "secure crc"
If you want to check the current value, including default value, run the following command:
root@master #
ceph config get CEPH_COMPONENT CONNECTION_OPTION
For example, to get the ms_cluster_mode for OSDs, run:
root@master #
ceph config get osd ms_cluster_mode
7.2.12 Configuring the cluster network #
Optionally, if you are running a separate cluster network, you may need to
set the cluster network IP address followed by the subnet mask part after
the slash sign, for example 192.168.10.22/24
.
Run the following commands to enable cluster_network
:
root@master #
ceph-salt config /cephadm_bootstrap/ceph_conf add global
root@master #
ceph-salt config /cephadm_bootstrap/ceph_conf/global set cluster_network NETWORK_ADDR
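For example, using the example address mentioned above:
root@master #
ceph-salt config /cephadm_bootstrap/ceph_conf/global set cluster_network 192.168.10.22/24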
7.2.13 Verifying the cluster configuration #
The minimal cluster configuration is finished. Inspect it for obvious errors:
root@master #
ceph-salt config ls
o- / ............................................................... [...]
o- ceph_cluster .................................................. [...]
| o- minions .............................................. [Minions: 5]
| | o- ses-master.example.com .................................. [admin]
| | o- ses-min1.example.com ......................... [bootstrap, admin]
| | o- ses-min2.example.com ................................. [no roles]
| | o- ses-min3.example.com ................................. [no roles]
| | o- ses-min4.example.com ................................. [no roles]
| o- roles ....................................................... [...]
| o- admin .............................................. [Minions: 2]
| | o- ses-master.example.com ....................... [no other roles]
| | o- ses-min1.example.com ................. [other roles: bootstrap]
| o- bootstrap ................................ [ses-min1.example.com]
| o- cephadm ............................................ [Minions: 5]
| o- tuned ..................................................... [...]
| o- latency .......................................... [no minions]
| o- throughput ....................................... [no minions]
o- cephadm_bootstrap ............................................. [...]
| o- advanced .................................................... [...]
| o- ceph_conf ................................................... [...]
| o- ceph_image_path ............... [registry.suse.com/ses/7/ceph/ceph]
| o- dashboard ................................................... [...]
| o- force_password_update ................................. [enabled]
| o- password ................................... [randomly generated]
| o- username ................................................ [admin]
| o- mon_ip ............................................ [192.168.10.20]
o- containers .................................................... [...]
| o- registries_conf ......................................... [enabled]
| | o- registries .............................................. [empty]
| o- registry_auth ............................................... [...]
| o- password .............................................. [not set]
| o- registry .............................................. [not set]
| o- username .............................................. [not set]
o- ssh .................................................. [Key Pair set]
| o- private_key ..... [53:b1:eb:65:d2:3a:ff:51:6c:e2:1b:ca:84:8e:0e:83]
| o- public_key ...... [53:b1:eb:65:d2:3a:ff:51:6c:e2:1b:ca:84:8e:0e:83]
o- time_server ............................................... [enabled]
o- external_servers .............................................. [1]
| o- 0.pt.pool.ntp.org ......................................... [...]
o- servers ....................................................... [1]
| o- ses-master.example.com .................................... [...]
o- subnet ............................................. [10.20.6.0/24]
You can check if the configuration of the cluster is valid by running the following command:
root@master #
ceph-salt status
cluster: 5 minions, 0 hosts managed by cephadm
config: OK
7.2.14 Exporting cluster configurations #
After you have configured the basic cluster and its configuration is valid, it is a good idea to export its configuration to a file:
root@master #
ceph-salt export > cluster.json
The output of the ceph-salt export command includes the SSH private key. If you are concerned about the security implications, do not execute this command without taking appropriate precautions.
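A simple precaution, shown here only as an example, is to restrict the permissions of the exported file so that only root can read it:
root@master #
chmod 600 cluster.json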
In case you break the cluster configuration and need to revert to a backup state, run:
root@master #
ceph-salt import cluster.json
7.3 Updating nodes and bootstrapping the minimal cluster #
Before you deploy the cluster, update all software packages on all nodes:
root@master #
ceph-salt update
If a node reports Reboot is needed
during the update,
important OS packages—such as the kernel—were updated to a newer
version and you need to reboot the node to apply the changes.
To reboot all nodes that require rebooting, either append the
--reboot
option
root@master #
ceph-salt update --reboot
Or, reboot them in a separate step:
root@master #
ceph-salt reboot
The Salt Master is never rebooted by the ceph-salt update --reboot or ceph-salt reboot commands. If the Salt Master needs rebooting, you need to reboot it manually.
After the nodes are updated, bootstrap the minimal cluster:
root@master #
ceph-salt apply
When bootstrapping is complete, the cluster will have one Ceph Monitor and one Ceph Manager.
The above command will open an interactive user interface that shows the current progress of each minion.
If you need to apply the configuration from a script, there is also a non-interactive mode of deployment. This is also useful when deploying the cluster from a remote machine because constant updating of the progress information on the screen over the network may become distracting:
root@master #
ceph-salt apply --non-interactive
7.4 Reviewing final steps #
After the ceph-salt apply
command has completed, you
should have one Ceph Monitor and one Ceph Manager. You should be able to run the
ceph status
command successfully on any of the minions
that were given the admin
role as root
or the cephadm
user using sudo
.
The next steps involve using cephadm to deploy additional Ceph Monitors, Ceph Managers, OSDs, the Monitoring Stack, and Gateways.
Before you continue, review your new cluster's network settings. At this
point, the public_network
setting has been populated
based on what was entered for /cephadm_bootstrap/mon_ip
in the ceph-salt
configuration. However, this setting was
only applied to Ceph Monitor. You can review this setting with the following
command:
root@master #
ceph config get mon public_network
This is the minimum that Ceph requires to work, but we recommend making
this public_network
setting global
,
which means it will apply to all types of Ceph daemons, and not only to
MONs:
root@master #
ceph config set global public_network "$(ceph config get mon public_network)"
This step is not required. However, if you do not use this setting, the Ceph OSDs and other daemons (except Ceph Monitor) will listen on all addresses.
If you want your OSDs to communicate amongst themselves using a completely separate network, run the following command:
root@master #
ceph config set global cluster_network "cluster_network_in_cidr_notation"
Executing this command will ensure that the OSDs created in your deployment will use the intended cluster network from the start.
If your cluster is set to have dense nodes (greater than 62 OSDs per host),
make sure to assign sufficient ports for Ceph OSDs. The default range
(6800-7300) currently allows for no more than 62 OSDs per host. For a
cluster with dense nodes, adjust the setting
ms_bind_port_max
to a suitable value. Each OSD will
consume eight additional ports. For example, given a host that is set to run
96 OSDs, 768 ports will be needed. ms_bind_port_max
should be set at least to 7568 by running the following command:
root@master #
ceph config set osd.* ms_bind_port_max 7568
You will need to adjust your firewall settings accordingly for this to work. See Section 13.7, “Firewall settings for Ceph” for more information.
7.5 Disabling insecure clients #
Since Octopus v15.2.11, a new health warning informs you that insecure clients are allowed to join the cluster. This warning is on by default. The Ceph Dashboard will show the cluster in the HEALTH_WARN status, and verifying the cluster status on the command line informs you as follows:
cephuser@adm >
ceph status
cluster:
id: 3fe8b35a-689f-4970-819d-0e6b11f6707c
health: HEALTH_WARN
mons are allowing insecure global_id reclaim
[...]
This warning means that the Ceph Monitors are still allowing old, unpatched clients to connect to the cluster. This ensures existing clients can still connect while the cluster is being upgraded, but warns you that there is a problem that needs to be addressed. When the cluster and all clients are upgraded to the latest version of Ceph, disallow unpatched clients by running the following command:
cephuser@adm >
ceph config set mon auth_allow_insecure_global_id_reclaim false
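After disallowing insecure clients, you can confirm that the warning has cleared; assuming no other issues, the cluster health returns to HEALTH_OK:
cephuser@adm >
ceph health
HEALTH_OK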