Applies to SUSE Enterprise Storage 6

8 Improving Performance with LVM cache

Warning: Technology Preview

LVM cache is currently a technology preview.

LVM cache is a caching mechanism used to improve the performance of a logical volume (LV). Typically, a smaller and faster device is used to improve I/O performance of a larger and slower LV. Refer to its manual page (man 7 lvmcache) to find more details about LVM cache.

In SUSE Enterprise Storage, LVM cache can improve the performance of OSDs. Support for LVM cache is provided via a ceph-volume plugin. You can find detailed information about its usage by running ceph-volume lvmcache.

8.1 Prerequisites

To use LVM cache features to improve the performance of a Ceph cluster, you need to have:

  • A running Ceph cluster in a stable state ('HEALTH_OK').

  • OSDs deployed with BlueStore and LVM. This is the default if the OSDs were deployed using SUSE Enterprise Storage 6 or later.

  • Empty disks or partitions that will be used for caching.
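To verify these prerequisites, you can, for example, check the cluster health and the OSD deployment from the command line. The following is a minimal sketch, assuming the commands are run on a node with an admin keyring and that OSD-ID stands for one of your OSD IDs:

cephadm@osd > ceph status                                       # health should report HEALTH_OK
cephadm@osd > ceph-volume lvm list                              # lists OSDs that were deployed with LVM
cephadm@osd > ceph osd metadata OSD-ID | grep osd_objectstore   # should report "bluestore"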

8.2 Points to Consider

Consider the following points before configuring your OSDs to use LVM cache:

  • Verify that LVM cache is suitable for your use case. If you have only a few fast drives available that are not used for OSDs, the general recommendation is to use them as WAL/DB devices for the OSDs. In such a case, WAL and DB operations (small and rare operations) are applied on the fast drive while data operations are applied on the slower OSD drive.

    Tip

    If latency is more important for your deployment than IOPS or throughput, you can use the fast drives as LVM cache rather than WAL/DB partitions.

  • If you plan to use a fast drive as an LVM cache for multiple OSDs, be aware that all OSD operations (including replication) will go through the caching device. All reads will be queried from the caching device, and are only served from the slow device in case of a cache miss. Writes are always applied to the caching device first, and are flushed to the slow device at a later time ('writeback' is the default caching mode).

    When deciding whether to use an LVM cache, verify whether the fast drive can serve as a front for multiple OSDs while still providing an acceptable number of IOPS. You can test it by measuring the maximum number of IOPS that the fast device can serve, and then dividing the result by the number of OSDs behind the fast device (see the example after this list). If the result is lower than or close to the maximum number of IOPS that the OSD can provide without the cache, LVM cache is probably not suited for this setup.

  • The interaction of the LVM cache device with OSDs is important. Writes are periodically flushed from the caching device to the slow device. If the incoming traffic is sustained and significant, the caching device will struggle to keep up with both the incoming requests and the flushing process, resulting in a performance drop. Unless the fast device can provide much higher IOPS with better latency than the slow device, do not use LVM cache with a sustained high-volume workload. Traffic in a burst pattern is better suited for LVM cache, as it gives the cache time to flush its dirty data without interfering with client traffic. For a sustained low-traffic workload, it is difficult to predict in advance whether using LVM cache will improve performance. The best test is to benchmark and compare the LVM cache setup against the WAL/DB setup. Moreover, because small writes are heavy on the WAL partition, it is suggested to use the fast device for the DB and/or WAL instead of as an LVM cache.

  • If you are not sure whether to use LVM cache, use the fast device as a WAL and/or DB device.
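To measure the maximum number of IOPS of the fast device, you can, for example, use the fio tool. The following is only a sketch: fio is not part of the LVM cache tooling, /dev/nvme0n1 is a placeholder for your unused fast device, and writing to the raw device destroys any data on it:

cephadm@osd > sudo fio --name=iops-test --filename=/dev/nvme0n1 \
 --rw=randwrite --bs=4k --direct=1 --ioengine=libaio \
 --iodepth=32 --runtime=60 --time_based --group_reporting

Divide the IOPS value that fio reports by the number of OSDs that will share the device, and compare the result with the IOPS a single OSD delivers without a cache.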

8.3 Preparation

You need to split the fast device into multiple partitions. Each OSD needs two partitions for its cache: one for the cache data, and one for the cache metadata. The minimum size of either partition is 2 GB. You can use a single fast device to cache multiple OSDs. It simply needs to be partitioned accordingly.
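For example, the following sketch prepares one OSD's cache partitions on an empty fast device using sgdisk. The device path /dev/nvme0n1, the partition names, and the 50 GB data partition size are assumptions; adjust them to your hardware:

cephadm@osd > sudo sgdisk --new=1:0:+2G --change-name=1:osd-cache-md /dev/nvme0n1
cephadm@osd > sudo sgdisk --new=2:0:+50G --change-name=2:osd-cache-data /dev/nvme0n1

Repeat this pair of partitions for each OSD that will be cached by the device.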

8.4 Configuring LVM cache

You can find detailed information about adding, removing, and configuring LVM cache by running the ceph-volume lvmcache command.

8.4.1 Adding LVM cache

To add LVM cache to an existing OSD, use the following command:

cephadm@osd > ceph-volume lvmcache add \
 --cachemetadata METADATA-PARTITION \
 --cachedata DATA-PARTITION \
 --osd-id OSD-ID

The optional --data, --db, or --wal option specifies which partition to cache. The default is --data.

Tip: Specify Logical Volume (LV)

Alternatively, you can use the --origin option instead of the --osd-id option to specify which LV to cache:

[...]
--origin VOLUME-GROUP/LOGICAL-VOLUME
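For example, to cache the data partition of the OSD with ID 0 using the two partitions prepared in Section 8.3 (the partition paths and the OSD ID are placeholders):

cephadm@osd > ceph-volume lvmcache add \
 --cachemetadata /dev/nvme0n1p1 \
 --cachedata /dev/nvme0n1p2 \
 --osd-id 0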

8.4.2 Removing LVM cache

To remove existing LVM cache from an OSD, use the following command:

cephadm@osd > ceph-volume lvmcache rm --osd-id OSD-ID

8.4.3 Setting LVM cache Mode

To specify caching mode, use the following command:

cephadm@osd > ceph-volume lvmcache mode --set CACHING-MODE --osd-id OSD-ID

CACHING-MODE is either 'writeback' (the default) or 'writethrough'.
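For example, to switch the OSD with ID 0 (a placeholder) to 'writethrough' and verify the result with the LVM tools:

cephadm@osd > ceph-volume lvmcache mode --set writethrough --osd-id 0
cephadm@osd > sudo lvs -o+cache_mode          # shows the caching mode of the cached LVs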

8.5 Handling Failures

If the caching device fails, all OSDs behind the caching device need to be removed from the cluster (see Section 2.7, “Removing an OSD”), purged, and redeployed. If the OSD drive fails, the OSD's LV as well as its cache's LV will be active but not functioning. Use pvremove PARTITION to purge the physical volumes that were used for the OSD's cache data and metadata partitions. You can use pvs to list all physical volumes.
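After the failed OSD has been removed from the cluster, the cleanup might look like the following sketch; the partition paths are placeholders for the cache data and metadata partitions reported by pvs:

cephadm@osd > sudo pvs                         # identify the OSD's cache partitions
cephadm@osd > sudo pvremove /dev/nvme0n1p1     # cache metadata partition (placeholder)
cephadm@osd > sudo pvremove /dev/nvme0n1p2     # cache data partition (placeholder)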

8.6 Frequently Asked Questions

Q: 1. What happens if an OSD is removed?

When removing the OSD's LV using lvremove, the cache LVs will be removed as well. However, you will still need to call pvremove on the partitions to make sure all labels have been wiped out.

Q: 2. What happens if the OSD is zapped using ceph-volume zap?

The same answer applies as to the question What happens if an OSD is removed?

Q: 3. What happens if the origin drive fails?

The cache LVs still exist, and the cache information still shows them as available. You will not be able to uncache, because LVM fails to flush the cache while the origin LV's device is gone. The origin LV now exists, but its backing device does not. You can fix this by using the pvs command and locating the devices that are associated with the origin LV. You can then remove them using

cephadm@osd > sudo pvremove /dev/DEVICE-OR-PARTITION

You can do the same for the cache partitions. This procedure will make the origin LV as well as the cache LVs disappear. You can also use

cephadm@osd > sudo dd if=/dev/zero of=/dev/DEVICE-OR-PARTITION

to wipe them out before using pvremove.
