Applies to SUSE Enterprise Storage 6

3 Introduction to Tuning SUSE Enterprise Storage Clusters

Tuning a distributed cluster is an exercise in the scientific method, backed by iterative testing. By taking a holistic look at the cluster and then delving into all of its components, it is possible to achieve dramatic improvements. Over the course of the work that contributed to this guide, the authors saw performance more than double in some specific cases.

This guide is intended to help the reader understand the what and how of tuning a SUSE Enterprise Storage cluster. Some topics are beyond its scope, and further adjustments may be needed for an individual cluster to achieve optimum performance in a particular end-user environment.

This reference guide is targeted at architects and administrators who need to tune their SUSE Enterprise Storage cluster for optimal performance. Familiarity with Linux and Ceph is assumed.

3.1 Philosophy of Tuning

Tuning requires looking at the entire system being tuned and approaching the process with scientific rigor. An exhaustive approach involves taking an initial baseline, altering a single variable at a time, measuring the result, and then reverting to the default before moving on to the next tuning parameter. At the end of that process, it is possible to examine the results, whether increased throughput, reduced latency, or reduced CPU consumption, and decide which changes are likely candidates for combining for additive benefit. This second phase should be iterated through in the same fashion as the first, and the process continued until all possible combinations have been tried and the optimal settings discovered.

Unfortunately, few have the time to perform such an exhaustive effort. Given that reality, it is possible to apply existing knowledge and begin an iterative process of combining well-known candidates for performance improvement and measuring the resulting changes. That is the process followed during the research that produced this guide.

3.2 The Process

A proper process is required for effective tuning to occur. The tenets of this process are:

Measure

Start with a baseline and measure the same way after each iteration. Make sure you are measuring all relevant dimensions: discovering that you are CPU-bound only after working through multiple iterations invalidates all the time spent on them. (A minimal measurement sketch follows this list.)

Document

Document all results for future analysis. Patterns may not be evident until later review.

Discuss

When possible, discuss the results you are seeing with others for their insights and feedback.

Repeat

A single measurement is not a guarantee of repeatability. Performing the same test multiple times helps to better establish an outcome.

Isolate variables

Changing only a single variable at a time may make the process longer, but it helps validate the particular adjustment being tested.
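
As a minimal sketch of the measure-and-repeat discipline described above, the following shell loop runs the same fio job several times and keeps every result for later analysis. The device path /dev/sdX, the job parameters, and the output file names are illustrative assumptions, not recommendations for any particular cluster.

  # Run the identical baseline test three times and keep every result.
  # /dev/sdX and all job parameters are placeholders; adjust for your environment.
  for run in 1 2 3; do
      fio --name=baseline --filename=/dev/sdX --direct=1 --ioengine=libaio \
          --rw=randread --bs=4k --iodepth=16 --runtime=120 --time_based \
          --group_reporting --output=baseline-run-${run}.txt
  done

Comparing the saved output files against each other, and against later iterations, is what makes patterns visible during the documentation and discussion steps.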

3.3 Hardware and Software

The work that produced this guide was performed on SUSE Enterprise Storage running on two models of servers. Any results referenced in this guide are from this specific hardware environment. Variations in the environment can and will affect the results.

Storage nodes:

  • 2U Server

    • 1x Intel Skylake 6124

    • 96 GB RAM

    • Mellanox Dual Port ConnectX-4 100 GbE

    • 12x Intel SSD D3-S4510 960 GB

    • RAID-1 480 GB M.2 Boot Device

Admin, monitor, and protocol gateways:

  • 1U Server

    • 1x Intel Skylake 4112

    • 32 GB RAM

    • Mellanox Dual Port ConnectX-4 100 GbE

    • RAID-1 480 GB M.2 Boot Device

Switches:

  • 2x 32-port 100 GbE

Software:

  • SUSE Enterprise Storage 5.5

  • SUSE Linux Enterprise Server 15 SP1

Note

Limited-use subscriptions are provided with SUSE Enterprise Storage as part of the subscription entitlement.

3.3.1 Performance Metrics

The performance of storage is measured on two different but related axes: latency and throughput. In some cases, one is more important than the other. For backup use cases, throughput is the most critical measurement, and maximum performance is achieved with larger transfer sizes. Conversely, for a high-performance database, latency is the most important measurement.

Latency is the time from when a request is made to when it is completed, usually measured in milliseconds. It is directly tied to CPU clock speed, the system bus, and device performance. Throughput is the amount of data that can be written or retrieved within a particular time period, usually measured in MB/s and GB/s (or MiB/s and GiB/s).

A third measurement that is often referred to is IOPS, which stands for Input/Output Operations Per Second. This measure is somewhat ambiguous, as the result depends on the size of the I/O operation, its type (read or write), and details of the I/O pattern: fresh write, overwrite, random write, or a mix of reads and writes. While ambiguous, it is still a valid tool for measuring changes in the performance of your storage environment. For example, it is possible to make adjustments that do not affect the latency of a single 4K sequential write operation but allow many more such operations to happen in parallel, resulting in a change in throughput and IOPS.
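
As an illustrative calculation of how these measurements relate (the numbers are not results from the test environment): throughput is approximately IOPS multiplied by the I/O size, so a device sustaining 25,000 IOPS at a 4 KiB block size delivers roughly 25,000 × 4 KiB ≈ 98 MiB/s, while reaching the same throughput at a 64 KiB block size requires only about 1,600 IOPS.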

3.4 Determining What to Measure

When tuning an environment, it is important to understand the I/O that is being tuned for. By properly understanding the I/O pattern, it is then possible to match the tests to the environment, resulting in a close simulation of the environment.

Tools that can be useful for understanding the I/O patterns are:

  • iostat

  • blktrace, blkparse

  • systemtap (stap)

  • dtrace

While discussing these tools is beyond the scope of this document, the information they can provide may be very helpful in understanding the I/O profile that needs to be tuned.
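
As a brief illustration only, and not a substitute for the tools' own documentation, typical starting invocations might look like the following. The device name /dev/sdX is a placeholder.

  # Extended per-device statistics, in megabytes, with timestamps, every 5 seconds.
  iostat -xmt 5

  # Trace block-layer events on one device and decode them as they arrive.
  blktrace -d /dev/sdX -o - | blkparse -i -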

3.4.1 Single Thread vs Aggregate IO

Understanding whether the workload needs scale-up (single-thread) or scale-out (aggregate) performance is often a key to proper tuning as well, particularly when tuning the hardware architecture and when creating test scenarios that provide valid information about the application(s) in question.
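
As a sketch of that distinction, the two fio invocations below exercise the same device, first with a single job at queue depth 1 (single-thread, latency-oriented) and then with many jobs and a deeper queue (aggregate, throughput-oriented). The device path and the parameter values are illustrative assumptions.

  # Single-thread view: one job, queue depth 1; per-operation latency is the number to watch.
  fio --name=single --filename=/dev/sdX --direct=1 --ioengine=libaio \
      --rw=randread --bs=4k --numjobs=1 --iodepth=1 --runtime=120 --time_based

  # Aggregate view: many jobs, deeper queue; total IOPS and throughput are the numbers to watch.
  fio --name=aggregate --filename=/dev/sdX --direct=1 --ioengine=libaio \
      --rw=randread --bs=4k --numjobs=8 --iodepth=32 --runtime=120 --time_based \
      --group_reporting

If the aggregate numbers scale up while single-thread latency stays flat, the workload is a scale-out candidate; if single-thread latency dominates, choices that shorten the individual I/O path matter more.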

3.5 Testing Tools and Protocol

Proper testing involves selecting the right tools. For most performance test cases, fio is recommended, as it provides a vast array of options for constructing test cases. For some use cases, such as S3, it may be necessary to use alternative tools to test all phases of I/O.

Tools that are commonly used to simulate I/O are:

  • fio

  • iometer

When performing tests, it is imperative to use sound practices. This involves:

  • Pre-conditioning the media (a sketch follows this list).

  • Ensuring that what you measure is what you intend to measure.

  • Validating that the results you see make sense.

  • Testing each component individually and then in aggregate, one layer at a time.
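
A minimal sketch of these practices with fio, assuming a dedicated test device /dev/sdX that contains no data you need (the sequential fill overwrites it):

  # Pre-condition: fill the device with sequential writes so that the measurement
  # reflects steady-state behavior rather than fresh, never-written media.
  fio --name=precondition --filename=/dev/sdX --direct=1 --ioengine=libaio \
      --rw=write --bs=1M --iodepth=16 --size=100%

  # Measure: the actual test, run only after pre-conditioning completes.
  fio --name=randwrite-test --filename=/dev/sdX --direct=1 --ioengine=libaio \
      --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 --runtime=300 --time_based \
      --group_reporting

Sanity-check the reported numbers against what the device class can plausibly deliver before repeating the exercise at the next layer up.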
