Applies to SUSE Enterprise Storage 7

6 Configuration

6.1 Ceph configuration

For almost any Ceph cluster, the user will want, and may need, to change some Ceph configuration settings. These changes are often warranted in order to tune performance to meet SLAs or to update default data resiliency settings.

Warning

Modify Ceph settings carefully, and review the Ceph configuration documentation before making any changes. Incorrect settings can result in unhealthy daemons or even data loss.

6.1.1 Required configurations

Rook and Ceph both strive to make configuration as easy as possible, but there are some configuration options which users are well advised to consider for any production cluster.

6.1.1.1 Default PG and PGP counts

The number of PGs and PGPs can be configured on a per-pool basis, but it is highly advised to set default values that are appropriate for your Ceph cluster. Appropriate values depend on the number of OSDs the user expects to have backing each pool.

Pools created with Rook prior to v1.1 will have a default PG count of 100. Pools created with Rook v1.1 or newer will have Ceph's default PG count.
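If you prefer to manage these defaults yourself, a minimal sketch, assuming a cluster running Nautilus or newer and a reachable toolbox, is to set the cluster-wide defaults with the Ceph CLI. The value 128 is only an illustration and should be derived from the number of OSDs backing your pools:

  ceph config set global osd_pool_default_pg_num 128
  ceph config set global osd_pool_default_pgp_num 128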

An easier option exists for Rook-Ceph clusters running Ceph Nautilus (v14.2.x) or newer. Nautilus introduced the PG auto-scaler mgr module capable of automatically managing PG and PGP values for pools.

In Nautilus, this module is not enabled by default, but can be enabled by the following setting in the CephCluster CR:

  spec:
    mgr:
      modules:
      - name: pg_autoscaler
        enabled: true

In Octopus (v15.2.x), this module is enabled by default without the aforementioned setting.

With that setting, the autoscaler will be enabled for all new pools. If you do not want the autoscaler enabled for all new pools, you will need to use the Rook toolbox to enable the module and to enable autoscaling on individual pools.

Enabling the module does not enable the autoscaler for existing pools. If you want to enable autoscaling for existing pools, they must be configured from the toolbox, as shown in the sketch below.
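For example, a minimal sketch of doing so from the toolbox, where the pool name replicapool is only a placeholder:

  ceph mgr module enable pg_autoscaler
  ceph osd pool set replicapool pg_autoscale_mode on
  ceph osd pool autoscale-status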

6.1.2 Specifying configuration options

6.1.2.1 Toolbox and the Ceph CLI

The recommended way of configuring Ceph is to set Ceph's configuration directly. The first method for doing so is to use Ceph's CLI from the Rook-Ceph toolbox pod. From the toolbox, the user can change Ceph configurations, enable manager modules, create users and pools, and much more.
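For example, a minimal sketch, assuming the default rook-ceph namespace and the rook-ceph-tools deployment created by the standard toolbox manifest:

  kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
  # inside the toolbox pod, any Ceph command is available, for example:
  ceph status
  ceph config set osd osd_max_backfills 2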

6.1.2.2 Ceph Dashboard

The Ceph Dashboard is another way of setting some of Ceph's configuration directly. Configuration via the Ceph Dashboard is recommended with the same priority as configuration via the Ceph CLI (above).

6.1.2.3 Advanced configuration via ceph.conf overrides ConfigMap

Setting configuration options via Ceph's CLI requires that at least one MON be available, and setting configuration options via the dashboard requires that at least one MGR be available. Ceph may also have a small number of very advanced settings that cannot easily be modified via the CLI or dashboard. The least recommended method for configuring Ceph, the ceph.conf overrides ConfigMap, is intended as a last-resort fallback for situations like these.
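As an illustration only, a minimal sketch of such an override, assuming the default rook-ceph namespace and the rook-config-override ConfigMap name used by Rook; the [global] setting shown is an arbitrary example:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: rook-config-override
    namespace: rook-ceph
  data:
    config: |
      [global]
      osd_pool_default_size = 3

Settings placed in this ConfigMap are typically only picked up by a daemon after its pod is restarted.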