Applies to SUSE Enterprise Storage 5.5 (SES 5 & SES 5.5)

12 Ceph Cluster Configuration

This chapter provides a list of important Ceph cluster settings and their descriptions, sorted by topic.

12.1 Runtime Configuration

Section 1.12, “Adjusting ceph.conf with Custom Settings” describes how to make changes to the Ceph configuration file ceph.conf. However, the actual cluster behavior is determined not by the current state of the ceph.conf file but by the configuration of the running Ceph daemons, which is stored in memory.

You can query an individual Ceph daemon for a particular configuration setting using the admin socket on the node where the daemon is running. For example, the following command gets the value of the osd_max_write_size configuration parameter from the daemon osd.0:

cephadm > ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok \
config get osd_max_write_size
{
    "osd_max_write_size": "90"
}

You can also change the daemons' settings at runtime. Remember that this change is temporary and will be lost after the next daemon restart. For example, the following command changes the osd_max_write_size parameter to '50' for all OSDs in the cluster:

cephadm > ceph tell osd.* injectargs --osd_max_write_size 50
Warning: injectargs is Not Reliable

Unfortunately, changing the cluster settings with the injectargs command is not 100% reliable. If you need to be sure that the changed parameter is active, change it in the configuration files on all cluster nodes and restart all daemons in the cluster.
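The permanent alternative described above can be sketched as follows. This is an illustrative example only: it works on a temporary copy of the configuration file, and the section name and the systemd target in the comment are assumptions. On a real cluster you would apply the same change to /etc/ceph/ceph.conf on every node.

```shell
# Sketch: persist the setting in the configuration file instead of
# relying on injectargs. We demonstrate on a temporary copy; on a real
# cluster, edit /etc/ceph/ceph.conf on ALL cluster nodes.
conf=$(mktemp)
printf '[osd]\n' > "$conf"
# Append the parameter under the [osd] section
echo 'osd_max_write_size = 50' >> "$conf"
cat "$conf"
# After distributing the file to all nodes, restart the daemons on
# each node, for example:
#   systemctl restart ceph-osd.target
```

After the restart, the admin socket query from the beginning of this section can confirm that the running daemon picked up the new value.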

12.2 Ceph OSD and BlueStore

12.2.1 Automatic Cache Sizing

BlueStore can be configured to automatically resize its caches when tcmalloc is configured as the memory allocator and the bluestore_cache_autotune setting is enabled. This option is currently enabled by default. BlueStore will attempt to keep OSD heap memory usage under a designated target size via the osd_memory_target configuration option. This is a best-effort algorithm, and caches will not shrink smaller than the amount specified by osd_memory_cache_min. Cache ratios will be chosen based on a hierarchy of priorities. If priority information is not available, the bluestore_cache_meta_ratio and bluestore_cache_kv_ratio options are used as fallbacks.


bluestore_cache_autotune

Automatically tunes the ratios assigned to different BlueStore caches while respecting minimum values. Default is True.

osd_memory_target

When tcmalloc is in use and bluestore_cache_autotune is enabled, try to keep this many bytes mapped in memory.

Note: this may not exactly match the RSS memory usage of the process. While the total amount of heap memory mapped by the process should generally stay close to this target, there is no guarantee that the kernel will actually reclaim memory that has been unmapped.

osd_memory_cache_min

When tcmalloc is in use and bluestore_cache_autotune is enabled, set the minimum amount of memory used for caches.

Note: setting this value too low can result in significant cache thrashing.
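As a sketch, these options can be set in the [osd] section of ceph.conf. The byte values below are illustrative examples, not tuning recommendations for any particular cluster:

```ini
[osd]
# Illustrative values: target heap size and cache floor per OSD
osd_memory_target = 4294967296
osd_memory_cache_min = 134217728
```

As with any ceph.conf change, the file must be updated on all cluster nodes and the OSD daemons restarted for the settings to take effect reliably.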
