The architecture of the Ceph Manager (refer to Section 1.2.3, “Ceph Nodes and Daemons” for a brief introduction) allows extending its functionality via modules, such as 'dashboard' (see Part II, “Ceph Dashboard”), 'prometheus' (see Chapter 18, Monitoring and Alerting), or 'balancer'.
To list all available modules, run:
cephadm@adm > ceph mgr module ls
{
    "enabled_modules": [
        "restful",
        "status"
    ],
    "disabled_modules": [
        "dashboard"
    ]
}
To enable or disable a specific module, run:
cephadm@adm > ceph mgr module enable MODULE-NAME
For example, to disable the 'dashboard' module:
cephadm@adm > ceph mgr module disable dashboard
To list the services that the enabled modules provide, run:
cephadm@adm > ceph mgr services
{
    "dashboard": "http://myserver.com:7789/",
    "restful": "https://myserver.com:8789/"
}
The balancer module optimizes the placement group (PG) distribution across OSDs to achieve a more balanced deployment. Although the module is enabled by default, it does not start balancing until you activate it. It supports the following two modes: 'crush-compat' and 'upmap'.
To view the current balancer configuration, run:
cephadm@adm > ceph balancer status
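The command prints a short JSON summary. The exact fields vary between Ceph releases, but output similar to the following (the values here are only illustrative) shows whether the balancer is active, which mode it uses, and any stored plans:
{
    "active": false,
    "mode": "crush-compat",
    "plans": []
}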
We currently only support the 'crush-compat' mode because the 'upmap' mode requires an OSD feature that prevents any pre-Luminous OSDs from connecting to the cluster.
In 'crush-compat' mode, the balancer adjusts the OSDs' reweight-sets to achieve improved distribution of the data. It moves PGs between OSDs, temporarily causing a HEALTH_WARN cluster state resulting from misplaced PGs.
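While the PGs are being moved, the health section of the ceph -s output reports the misplaced objects. The figures below are purely illustrative:
health: HEALTH_WARN
        1521/252340 objects misplaced (0.603%)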
Although 'crush-compat' is the default mode, we recommend activating it explicitly:
cephadm@adm > ceph balancer mode crush-compat
Using the balancer module, you can create a plan for data balancing. You can then execute the plan manually, or let the balancer balance PGs continuously.
The decision whether to run the balancer in manual or automatic mode depends on several factors, such as the current data imbalance, cluster size, PG count, or I/O activity. We recommend creating an initial plan and executing it at a time of low I/O load in the cluster. The reason for this is that the initial imbalance will probably be considerable and it is a good practice to keep the impact on clients low. After an initial manual run, consider activating the automatic mode and monitor the rebalance traffic under normal I/O load. The improvements in PG distribution need to be weighed against the rebalance traffic caused by the balancer.
During balancing, the balancer module throttles PG movements so that only a configurable fraction of PGs is moved at a time. The default is 5%. To adjust the fraction, to 9% for example, run the following command:
cephadm@adm > ceph config set mgr target_max_misplaced_ratio .09
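To verify which value is currently in effect, you can read it back from the cluster configuration database, assuming your release provides the ceph config get command:
cephadm@adm > ceph config get mgr target_max_misplaced_ratio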
To create and execute a balancing plan, follow these steps:
Check the current cluster score:
cephadm@adm > ceph balancer eval
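The command returns a single score where lower values indicate a better distribution. The output looks similar to the following; the number shown here is only an example:
current cluster score 0.023216 (lower is better)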
Create a plan. For example, 'great_plan':
cephadm@adm > ceph balancer optimize great_plan
See what changes the 'great_plan' will entail:
cephadm@adm > ceph balancer show great_plan
Check the potential cluster score if you decide to apply the 'great_plan':
cephadm@adm > ceph balancer eval great_plan
Execute the 'great_plan' once:
cephadm@adm > ceph balancer execute great_plan
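After the plan has been executed, you can re-check the cluster score to confirm that the distribution has improved:
cephadm@adm > ceph balancer eval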
Observe the cluster balancing with the ceph -s
command. If you are satisfied with the result, activate automatic
balancing:
cephadm@adm > ceph balancer on
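To confirm that automatic balancing is now running, check the status again. The 'active' field should report true; as noted above, the exact set of fields depends on your Ceph release:
cephadm@adm > ceph balancer status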
If you later decide to deactivate automatic balancing, run:
cephadm@adm > ceph balancer off
You can activate automatic balancing without executing an initial plan. In such a case, expect a potentially long-running rebalancing of placement groups.
The telemetry module sends the Ceph project anonymous data about the cluster in which it is running.
The data collected by this opt-in component contains counters and statistics on how the cluster has been deployed, the version of Ceph, the distribution of the hosts, and other parameters that help the project gain a better understanding of the way Ceph is used. It does not contain any sensitive data such as pool names, object names, object contents, or host names.
The purpose of the telemetry module is to provide an automated feedback loop for the developers that helps quantify adoption rates, track usage, and point out things that need to be better explained or validated during configuration in order to prevent undesirable outcomes.
The telemetry module requires the Ceph Manager nodes to have the ability to push data over HTTPS to the upstream servers. Ensure your corporate firewalls permit this action.
To enable the telemetry module:
cephadm@adm > ceph mgr module enable telemetry
This command only enables you to view your data locally; it does not share your data with the Ceph community.
To allow the telemetry module to start sharing data:
cephadm@adm > ceph telemetry on
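Depending on your Ceph release, you may also be able to trigger an ad hoc report immediately instead of waiting for the next scheduled one. If the subcommand is available in your release, run:
cephadm@adm > ceph telemetry send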
To disable telemetry data sharing:
cephadm@adm > ceph telemetry off
To generate a JSON report that can be printed:
cephadm@adm > ceph telemetry show
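The report is a plain JSON document, so you can, for example, redirect it to a file for closer inspection before deciding to share it (the path below is only an example):
cephadm@adm > ceph telemetry show > /tmp/telemetry-report.json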
To add a contact and description to the report:
cephadm@adm > ceph config set mgr mgr/telemetry/contact 'John Doe john.doe@example.com'
cephadm@adm > ceph config set mgr mgr/telemetry/description 'My first Ceph cluster'
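If you want to double-check what is stored, the values can be read back like other Ceph Manager options, assuming your release provides the ceph config get command:
cephadm@adm > ceph config get mgr mgr/telemetry/contact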
The module compiles and sends a new report every 24 hours by default. To adjust this interval:
cephadm@adm > ceph config set mgr mgr/telemetry/interval HOURS
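For example, to have the module compile and send a report once every three days (72 hours):
cephadm@adm > ceph config set mgr mgr/telemetry/interval 72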