Applies to SUSE Enterprise Storage 7

29 Ceph Manager modules

The architecture of the Ceph Manager (refer to Section 1.2.3, “Ceph nodes and daemons” for a brief introduction) allows extending its functionality via modules, such as 'dashboard' (see Part I, “Ceph Dashboard”), 'prometheus' (see Chapter 16, Monitoring and alerting), or 'balancer'.

To list all available modules, run:

cephuser@adm > ceph mgr module ls
{
        "enabled_modules": [
                "restful",
                "status"
        ],
        "disabled_modules": [
                "dashboard"
        ]
}

To enable or disable a specific module, run:

cephuser@adm > ceph mgr module enable MODULE-NAME

For example, to disable the 'dashboard' module:

cephuser@adm > ceph mgr module disable dashboard
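
To undo this change and confirm the result, you can enable the module again and re-run the listing command shown above (output abbreviated and illustrative; the exact module list depends on your cluster):

cephuser@adm > ceph mgr module enable dashboard
cephuser@adm > ceph mgr module ls
{
        "enabled_modules": [
                "dashboard",
                "restful",
                "status"
        ],
[...]
}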

To list the services that the enabled modules provide, run:

cephuser@adm > ceph mgr services
{
        "dashboard": "http://myserver.com:7789/",
        "restful": "https://myserver.com:8789/"
}

29.1 Balancer

The balancer module optimizes the placement group (PG) distribution across OSDs to achieve a more balanced deployment. Although the module is enabled by default, it remains inactive until you explicitly turn balancing on. It supports the following two modes: crush-compat and upmap.

Tip: Current Balancer Status and Configuration

To view the current balancer status and configuration information, run:

cephuser@adm > ceph balancer status
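
The output is a short JSON document. On a cluster where the module is loaded but balancing has not yet been switched on, it may look similar to the following (field values are illustrative and can vary between releases):

{
        "active": false,
        "last_optimize_duration": "",
        "last_optimize_started": "",
        "mode": "crush-compat",
        "optimize_result": "",
        "plans": []
}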

29.1.1 The 'crush-compat' mode

In 'crush-compat' mode, the balancer adjusts the OSDs' reweight-sets to achieve improved distribution of the data. It moves PGs between OSDs, temporarily causing a HEALTH_WARN cluster state resulting from misplaced PGs.

Tip: Mode Activation

Although 'crush-compat' is the default mode, we recommend activating it explicitly:

cephuser@adm > ceph balancer mode crush-compat
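
To verify that the mode change took effect, check the 'mode' field in the balancer status output (filtered with grep here for brevity; output illustrative):

cephuser@adm > ceph balancer status | grep mode
    "mode": "crush-compat",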

29.1.2 Planning and executing data balancing

Using the balancer module, you can create a plan for data balancing. You can then execute the plan manually, or let the balancer balance PGs continuously.

The decision whether to run the balancer in manual or automatic mode depends on several factors, such as the current data imbalance, cluster size, PG count, or I/O activity. We recommend creating an initial plan and executing it at a time of low I/O load in the cluster. The reason for this is that the initial imbalance will probably be considerable and it is a good practice to keep the impact on clients low. After an initial manual run, consider activating the automatic mode and monitor the rebalance traffic under normal I/O load. The improvements in PG distribution need to be weighed against the rebalance traffic caused by the balancer.

Tip: Movable Fraction of Placement Groups (PGs)

During the balancing process, the balancer module throttles PG movements so that only a configurable fraction of the PGs is moved at a time. The default is 5%. To adjust the fraction, for example to 9%, run the following command:

cephuser@adm > ceph config set mgr target_max_misplaced_ratio .09
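
To double-check the new value, query the setting with ceph config get. Note that the option is stored as a ratio rather than a percentage (output illustrative):

cephuser@adm > ceph config get mgr target_max_misplaced_ratio
0.090000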

To create and execute a balancing plan, follow these steps (a sample session follows the procedure):

  1. Check the current cluster score:

    cephuser@adm > ceph balancer eval
  2. Create a plan. For example, 'great_plan':

    cephuser@adm > ceph balancer optimize great_plan
  3. See what changes the 'great_plan' will entail:

    cephuser@adm > ceph balancer show great_plan
  4. Check the potential cluster score if you decide to apply the 'great_plan':

    cephuser@adm > ceph balancer eval great_plan
  5. Execute the 'great_plan' once:

    cephuser@adm > ceph balancer execute great_plan
  6. Observe the cluster balancing with the ceph -s command. If you are satisfied with the result, activate automatic balancing:

    cephuser@adm > ceph balancer on

    If you later decide to deactivate automatic balancing, run:

    cephuser@adm > ceph balancer off
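
A sample manual run may look similar to the following. The plan name is arbitrary, the scores are purely illustrative, and the exact wording of the score output can differ between releases; a lower score indicates a better distribution:

cephuser@adm > ceph balancer eval
current cluster score 0.031891 (lower is better)
cephuser@adm > ceph balancer optimize great_plan
cephuser@adm > ceph balancer eval great_plan
plan great_plan final score 0.007684 (lower is better)
cephuser@adm > ceph balancer execute great_plan
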
Tip: Automatic Balancing without Initial Plan

You can activate automatic balancing without executing an initial plan. In such a case, expect a potentially long-running rebalancing of placement groups.

29.2 Enabling the telemetry module

The telemetry module sends anonymous data about the cluster in which it is running back to the Ceph project.

This (opt-in) component contains counters and statistics on how the cluster has been deployed, the version of Ceph, the distribution of the hosts, and other parameters which help the project gain a better understanding of the way Ceph is used. It does not contain any sensitive data such as pool names, object names, object contents, or host names.

The purpose of the telemetry module is to provide an automated feedback loop for the developers, to help quantify adoption and usage tracking, and to point out things that need to be better explained or validated during configuration to prevent undesirable outcomes.

Note

The telemetry module requires the Ceph Manager nodes to have the ability to push data over HTTPS to the upstream servers. Ensure your corporate firewalls permit this action.
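
If you are unsure whether outbound HTTPS is permitted, a quick check from a Ceph Manager node can help. The following sketch assumes the default upstream endpoint telemetry.ceph.com and that curl is installed; any HTTP status line in the response means the connection itself succeeded, whereas a timeout or connection error points to a blocked firewall:

cephuser@adm > curl -sI https://telemetry.ceph.com | head -n 1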

  1. To enable the telemetry module:

    cephuser@adm > ceph mgr module enable telemetry
    Note

    This command only enables you to view your data locally; it does not share your data with the Ceph community.

  2. To allow the telemetry module to start sharing data:

    cephuser@adm > ceph telemetry on
  3. To disable telemetry data sharing:

    cephuser@adm > ceph telemetry off
  4. To generate a JSON report that can be printed:

    cephuser@adm > ceph telemetry show
  5. To add a contact and description to the report:

    cephuser@adm > ceph config set mgr mgr/telemetry/contact 'John Doe john.doe@example.com'
    cephuser@adm > ceph config set mgr mgr/telemetry/description 'My first Ceph cluster'
  6. The module compiles and sends a new report every 24 hours by default. To adjust this interval:

    cephuser@adm > ceph config set mgr mgr/telemetry/interval HOURS
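
    For example, to have a report compiled and sent every 72 hours and to verify the resulting setting afterwards (output illustrative):

    cephuser@adm > ceph config set mgr mgr/telemetry/interval 72
    cephuser@adm > ceph config get mgr mgr/telemetry/interval
    72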