Applies to SUSE Enterprise Storage 7.1

4 View cluster internals

The Cluster menu item lets you view detailed information about Ceph cluster hosts, physical disks, Ceph Monitors, services, OSDs, configuration, the CRUSH Map, Ceph Manager modules, logs, and monitoring.

4.1 Viewing cluster nodes

Click Cluster › Hosts to view a list of cluster nodes.

Hosts
Figure 4.1: Hosts

Click the drop-down arrow next to a node name in the Hostname column to view the performance details of the node.

The Services column lists all daemons that are running on each related node. Click a daemon name to view its detailed configuration.
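
The same host and daemon inventory is also available from the command line through the cephadm orchestrator; a minimal sketch, where HOST_NAME is a placeholder for one of your cluster nodes:

  # List all hosts known to the orchestrator, including labels and status
  ceph orch host ls
  # List the daemons running on a particular host (omit HOST_NAME to list all daemons)
  ceph orch ps HOST_NAME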

4.2 Listing physical disks

Click Cluster › Physical Disks to view a list of physical disks. The list includes the device path, type, availability, vendor, model, size, and the OSDs deployed on them.

Select a device in the list (the Hostname column shows which host it belongs to), then click Identify to identify the device by making its LEDs blink. Select how long the LEDs should blink: 1, 2, 5, 10, or 15 minutes. Click Execute.

Physical disks
Figure 4.2: Physical disks
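
Roughly the same inventory and LED action are available from the command line; in the sketch below, DEVICE_ID is a placeholder for a device identifier as reported by the ceph device commands:

  # List the physical devices discovered on all cluster nodes
  ceph orch device ls --wide
  # Make the enclosure LED of a specific device blink
  ceph device light on DEVICE_ID
  # Stop the blinking again
  ceph device light off DEVICE_ID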

4.3 Viewing Ceph Monitors

Click Cluster › Monitors to view a list of cluster nodes with running Ceph monitors. The content pane is split into two views: Status, and In Quorum or Not In Quorum.

The Status table shows general statistics about the running Ceph Monitors, including the following:

  • Cluster ID

  • monmap modified

  • monmap epoch

  • quorum con

  • quorum mon

  • required con

  • required mon

The In Quorum and Not In Quorum panes include each monitor's name, rank number, public IP address, and number of open sessions.

Click a node name in the Name column to view the related Ceph Monitor configuration.

Ceph Monitors
Figure 4.3: Ceph Monitors
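
The same monitor and quorum information can be queried from the command line, for example:

  # Dump the monitor map: cluster ID, monmap epoch, and monitor addresses
  ceph mon dump
  # Show which monitors are currently in quorum
  ceph quorum_status --format json-pretty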

4.4 Displaying services

Click Cluster › Services to view details on each of the available services: crash, Ceph Manager, and Ceph Monitors. The list includes the container image name, container image ID, the number of running daemons, size, and when the information was last refreshed.

Click the drop-down arrow next to a service name in the Service column to view details of the daemon. The detail list includes the host name, daemon type, daemon ID, container ID, container image name, container image ID, version number, status, and when it was last refreshed.

Services
Figure 4.4: Services
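
A comparable overview of services and their daemons is available from the orchestrator on the command line:

  # List services with running/expected daemon counts and container images
  ceph orch ls
  # Export the current service specifications as YAML
  ceph orch ls --export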

4.4.1 Adding new cluster services

To add a new service to a cluster, click the Create button in the top left corner of the Services table.

In the Create Service window, specify the type of the service and then fill in the options required for the service type you selected. Confirm with Create Service.

An overlay window with the new service specification
Figure 4.5: Creating a new cluster service
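
The dashboard form corresponds to applying a service specification through the orchestrator. A minimal command-line sketch, assuming you want three MDS daemons for a hypothetical file system named cephfs:

  # Deploy an MDS service for the "cephfs" file system with three daemons
  ceph orch apply mds cephfs --placement="3"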

4.5 Displaying Ceph OSDs

Click Cluster › OSDs to view a list of nodes with running OSD daemons. The list includes each node's name, ID, status, device class, number of placement groups, size, usage, a chart of reads/writes over time, and the rate of read/write operations per second.

Ceph OSDs
Figure 4.6: Ceph OSDs
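
A comparable overview of OSD status and utilization is available on the command line:

  # Show OSDs arranged by the CRUSH hierarchy, with status and weight
  ceph osd tree
  # Show per-OSD utilization and placement group counts
  ceph osd df tree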

Select Flags from the Cluster-wide configuration drop-down menu in the table heading to open a pop-up window. This has a list of flags that apply to the whole cluster. You can activate or deactivate individual flags, and confirm with Submit.

OSD flags
Figure 4.7: OSD flags
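
The same cluster-wide flags can be set or cleared with the ceph CLI; for example, to prevent OSDs from being marked out during maintenance:

  # Set the noout flag cluster-wide
  ceph osd set noout
  # Remove the flag again when maintenance is finished
  ceph osd unset noout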

Select Recovery Priority from the Cluster-wide configuration drop-down menu in the table heading to open a pop-up window. This has a list of OSD recovery priorities that apply to the whole cluster. You can activate the preferred priority profile and fine-tune the individual values below. Confirm with Submit.

OSD recovery priority
Figure 4.8: OSD recovery priority
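
The priority profiles in this dialog adjust OSD recovery-related configuration options. If you prefer the command line, you can set such options directly; the option names are real, but the values below are illustrative only:

  # Limit concurrent backfills per OSD (lower values favor client I/O)
  ceph config set osd osd_max_backfills 1
  # Limit the number of active recovery operations per OSD
  ceph config set osd osd_recovery_max_active 3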

Click the drop-down arrow next to a node name in the Host column to view an extended table with details about the device settings and performance. Browsing through several tabs, you can see lists of Attributes, Metadata, Device health, Performance counter, a graphical Histogram of reads and writes, and Performance details.

OSD details
Figure 4.9: OSD details
Tip
Tip: Performing specific tasks on OSDs

After you click an OSD node name, the table row is highlighted. This means that you can now perform a task on the node. You can choose to perform any of the following actions: Edit, Create, Scrub, Deep Scrub, Reweight, Mark Out, Mark In, Mark Down, Mark Lost, Purge, Destroy, or Delete.

Click the down arrow in the top left of the table heading next to the Create button and select the task you want to perform.
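
Most of these tasks have direct ceph CLI equivalents. A few examples, where OSD_ID is a placeholder for the numeric ID of the OSD:

  # Schedule a scrub or deep scrub of a single OSD
  ceph osd scrub OSD_ID
  ceph osd deep-scrub OSD_ID
  # Mark an OSD out of, or back into, the data distribution
  ceph osd out OSD_ID
  ceph osd in OSD_ID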

4.5.1 Adding OSDs

To add new OSDs, follow these steps:

  1. Verify that some cluster nodes have storage devices whose status is available. Then click the down arrow in the top left of the table heading and select Create. This opens the Create OSDs window.

    Create OSDs
    Figure 4.10: Create OSDs
  2. To add primary storage devices for OSDs, click Add. Before you can add storage devices, you need to specify filtering criteria in the top right of the Primary devices table—for example Type hdd. Confirm with Add.

    Adding primary devices
    Figure 4.11: Adding primary devices
  3. In the updated Create OSDs window, optionally add shared WAL and DB devices, or enable device encryption.

    Create OSDs with primary devices added
    Figure 4.12: Create OSDs with primary devices added
  4. Click Preview to view a preview of the DriveGroups specification for the devices you previously added (a command-line equivalent is sketched after this procedure). Confirm with Create.

    Figure 4.13: DriveGroups specification preview
  5. The new OSDs will be added to the list of OSDs.

    Newly added OSDs
    Figure 4.14: Newly added OSDs
    Note
    Note

    There is no progress visualization of the OSD creation process. It takes some time before they are actually created. The OSDs will appear in the list when they have been deployed. If you want to check the deployment status, view the logs by clicking Cluster › Logs.
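
As mentioned in step 4, the dashboard generates a DriveGroups specification from your selection. A roughly equivalent command-line sketch, assuming a simple layout where all rotational disks become data devices and all solid-state disks are shared DB devices; the file name and service ID are placeholders:

  # drive_groups.yml -- illustrative DriveGroups specification
  service_type: osd
  service_id: example_drive_group
  placement:
    host_pattern: '*'
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0

  # Apply the specification through the orchestrator
  ceph orch apply -i drive_groups.yml

As with the dashboard workflow, the resulting OSDs appear in the OSDs list only after deployment has finished.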

4.6 Viewing cluster configuration

Click Cluster › Configuration to view a complete list of Ceph cluster configuration options. The list contains the name of the option, its short description, its current and default values, and whether the option is editable.

Cluster configuration
Figure 4.15: Cluster configuration

Click the drop-down arrow next to a configuration option in the Name column to view an extended table with detailed information about the option, such as its type of value, minimum and maximum permitted values, whether it can be updated at runtime, and more.

After highlighting a specific option, you can edit its value(s) by clicking the Edit button in the top left of the table heading. Confirm changes with Save.
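
The same configuration database can be inspected and modified with the ceph CLI; OPTION_NAME and VALUE are placeholders:

  # Dump all options that differ from their defaults
  ceph config dump
  # Show the value of a single option for a daemon type
  ceph config get osd osd_max_backfills
  # Change an option at runtime (for options that allow it)
  ceph config set osd OPTION_NAME VALUE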

4.7 Viewing the CRUSH Map

Click Cluster › CRUSH map to view a CRUSH Map of the cluster. For more general information on CRUSH Maps, refer to Section 17.5, “CRUSH Map manipulation”.

Click the root, nodes, or individual OSDs to view more detailed information, such as the CRUSH weight, depth in the map tree, the device class of the OSD, and more.

CRUSH Map
Figure 4.16: CRUSH Map
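
The CRUSH hierarchy can also be inspected from the command line:

  # Show the CRUSH tree with buckets, weights, and device classes
  ceph osd crush tree
  # Dump the complete CRUSH Map, including rules, in JSON format
  ceph osd crush dump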

4.8 Viewing manager modules

Click Cluster › Manager modules to view a list of available Ceph Manager modules. Each line consists of a module name and information on whether it is currently enabled or not.

Manager modules
Figure 4.17: Manager modules

Click the drop-down arrow next to a module in the Name column to view its detailed settings in the Details table below. Edit them by clicking the Edit button in the top left of the table heading. Confirm changes with Update.

Click the drop-down arrow next to the Edit button in the top left of the table heading to Enable or Disable a module.
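
Manager modules can also be listed, enabled, and disabled from the command line; telemetry is used here only as an example module name:

  # List available, enabled, and always-on manager modules
  ceph mgr module ls
  # Enable or disable a module by name
  ceph mgr module enable telemetry
  ceph mgr module disable telemetry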

4.9 Viewing logs

Click Cluster › Logs to view a list of the cluster's recent log entries. Each line consists of a time stamp, the type of the log entry, and the logged message itself.

Click the Audit Logs tab to view log entries of the auditing subsystem. Refer to Section 11.5, “Auditing API requests” for commands to enable or disable auditing.

Logs
Figure 4.18: Logs
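
Recent cluster log entries can also be displayed from the command line, for example:

  # Show the last 20 entries of the cluster log
  ceph log last 20
  # Follow new cluster log messages as they arrive
  ceph -w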

4.10 Viewing monitoring

Click Cluster › Monitoring to manage and view details on Prometheus alerts.

If Prometheus is active, this content pane shows detailed information on Active Alerts, All Alerts, and Silences.

Note
Note

If you do not have Prometheus deployed, an information banner will appear and link to relevant documentation.
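
If the monitoring stack is missing, it can be deployed through the orchestrator. A minimal sketch using default placement; refer to the documentation linked from the banner for the full procedure:

  # Deploy the Prometheus, Alertmanager, and node exporter services
  ceph orch apply prometheus
  ceph orch apply alertmanager
  ceph orch apply node-exporter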