Applies to SUSE Enterprise Storage 6

Part III Operating a Cluster

15 Introduction

In this part of the manual, you will learn how to start and stop Ceph services, monitor a cluster's state, use and modify CRUSH Maps, and manage storage pools.

16 Operating Ceph Services

You can operate Ceph services either using systemd or using DeepSea.
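With systemd, for example, the templated Ceph units can be inspected and restarted individually or through their umbrella targets. A minimal sketch (the OSD instance ID 1 below is a placeholder for one of your own daemons):

    # Check all Ceph services on the local node
    systemctl status ceph.target

    # Restart a single OSD daemon (instance ID is a placeholder)
    systemctl restart ceph-osd@1.service

    # Stop all Ceph Monitor daemons on this node
    systemctl stop ceph-mon.target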

17 Determining Cluster State

When you have a running cluster, you may use the ceph tool to monitor it. Determining the cluster state typically involves checking the status of Ceph OSDs, Ceph Monitors, placement groups, and Metadata Servers.
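A basic status check might look like the following (output varies by release and cluster state):

    # Overall cluster health and a one-page summary
    ceph health
    ceph status

    # Per-subsystem summaries: OSDs, Monitors, placement groups, MDS
    ceph osd stat
    ceph mon stat
    ceph pg stat
    ceph mds stat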

18 Monitoring and Alerting

In SUSE Enterprise Storage 6, DeepSea no longer deploys a monitoring and alerting stack on the Salt master. Users have to define the Prometheus role for Prometheus and Alertmanager, and the Grafana role for Grafana. When multiple nodes are assigned the Prometheus or Grafana role, a highly available setup is deployed.
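For illustration, the roles could be assigned on the Salt master by adding lines to DeepSea's /srv/pillar/ceph/proposals/policy.cfg; the mon* host pattern below is an assumption for the sketch, not a requirement:

    # Hypothetical policy.cfg additions assigning the monitoring roles
    echo 'role-prometheus/cluster/mon*.sls' >> /srv/pillar/ceph/proposals/policy.cfg
    echo 'role-grafana/cluster/mon*.sls'    >> /srv/pillar/ceph/proposals/policy.cfg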

19 Authentication with cephx

To identify clients and protect against man-in-the-middle attacks, Ceph provides its cephx authentication system. Clients in this context are either human users—such as the admin user—or Ceph-related services/daemons, for example OSDs, monitors, or Object Gateways.
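For example, a keyring with restricted capabilities can be created with the ceph auth command; the client name and pool below are placeholders:

    # Create (or fetch) a key for a hypothetical client, limited to one pool
    ceph auth get-or-create client.example mon 'allow r' osd 'allow rw pool=mypool'

    # List all keys and their capabilities known to the cluster
    ceph auth list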

20 Stored Data Management

The CRUSH algorithm determines how to store and retrieve data by computing data storage locations. CRUSH empowers Ceph clients to communicate with OSDs directly rather than through a centralized server or broker. With an algorithmically determined method of storing and retrieving data, Ceph avoids a single point of failure, a performance bottleneck, and a physical limit to its scalability.
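For instance, the CRUSH Map can be extracted and decompiled for inspection with the following commands (the file names are arbitrary):

    # Extract the compiled CRUSH Map from the cluster
    ceph osd getcrushmap -o crushmap.bin

    # Decompile it into an editable text representation
    crushtool -d crushmap.bin -o crushmap.txt

    # After editing, recompile and inject it back
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new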

21 Ceph Manager Modules

The architecture of the Ceph Manager (refer to Section 1.2.3, “Ceph Nodes and Daemons” for a brief introduction) allows extending its functionality via modules, such as 'dashboard' (see Part II, “Ceph Dashboard”), 'prometheus' (see Chapter 18, Monitoring and Alerting), or 'balancer'.
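Modules are listed, enabled, and disabled through the ceph mgr command, for example:

    # Show enabled and available Ceph Manager modules
    ceph mgr module ls

    # Enable and later disable the 'balancer' module
    ceph mgr module enable balancer
    ceph mgr module disable balancer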

22 Managing Storage Pools

Ceph stores data within pools. Pools are logical groups for storing objects. When you first deploy a cluster without creating a pool, Ceph uses the default pools for storing data. The chapter covers the most important highlights of Ceph pools.
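As a quick illustration, a replicated pool can be created and inspected as follows (the pool name and placement group count are placeholders):

    # Create a replicated pool with 64 placement groups
    ceph osd pool create mypool 64 64 replicated

    # List pools and show the pool's replication size
    ceph osd pool ls
    ceph osd pool get mypool size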

23 RADOS Block Device

A block is a sequence of bytes, for example a 4 MB block of data. Block-based storage interfaces are the most common way to store data on rotating media, such as hard disks, CDs, and floppy disks. The ubiquity of block device interfaces makes a virtual block device an ideal candidate for interacting with a mass data storage system like Ceph.
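A minimal RADOS Block Device workflow, assuming an existing pool named mypool, might look like this:

    # Create a 4 GB virtual block device image
    rbd create --size 4096 mypool/myimage

    # Map it to a local block device (returns, for example, /dev/rbd0)
    rbd map mypool/myimage

    # List images and unmap the device when done
    rbd ls mypool
    rbd unmap /dev/rbd0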

24 Erasure Coded Pools

Ceph provides an alternative to the normal replication of data in pools, called erasure pools or erasure coded pools. Erasure pools do not provide all the functionality of replicated pools (for example, they cannot store metadata for RBD pools), but they require less raw storage. A default erasure pool capable of storing 1 TB of data requires 1.5 TB of raw storage, allowing a single disk failure.
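The raw-storage overhead follows from the erasure code profile: with k data chunks and m coding chunks per object, each object consumes (k+m)/k times its size, so k=2, m=1 gives the 1.5x overhead above. A sketch using such a profile (the profile and pool names are placeholders):

    # Define an erasure code profile with 2 data and 1 coding chunk
    ceph osd erasure-code-profile set myprofile k=2 m=1

    # Create an erasure coded pool using that profile
    ceph osd pool create ecpool 64 64 erasure myprofile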

25 Ceph Cluster Configuration

This chapter provides a list of important Ceph cluster settings and their descriptions, sorted by topic.
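Individual settings can also be inspected at runtime through a daemon's admin socket, for example (the daemon ID is a placeholder, and the command must run on the node hosting that daemon):

    # Show the running configuration of one OSD daemon
    ceph daemon osd.0 config show

    # Read a single setting
    ceph daemon osd.0 config get osd_max_backfills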
