Applies to SUSE Enterprise Storage 7.1

6 Deploying Salt

SUSE Enterprise Storage uses Salt and ceph-salt for the initial cluster preparation. Salt helps you configure and run commands on multiple cluster nodes simultaneously from one dedicated host called the Salt Master. Before deploying Salt, consider the following important points:

  • Salt Minions are the nodes controlled by a dedicated node called the Salt Master.

  • If the Salt Master host is to be part of the Ceph cluster, it needs to run its own Salt Minion. Making the Salt Master part of the cluster is, however, not a requirement.

    Tip: Sharing multiple roles per server

    You will get the best performance from your Ceph cluster when each role is deployed on a separate node. However, real deployments sometimes require sharing one node among multiple roles. To avoid performance problems and complications during the upgrade procedure, do not deploy the Ceph OSD, Metadata Server, or Ceph Monitor role to the Admin Node.

  • Salt Minions need to correctly resolve the Salt Master's host name over the network. By default, they look for the host name salt, but you can specify any other network-reachable host name in the /etc/salt/minion file.

  1. Install the salt-master package on the Salt Master node:

    root@master # zypper in salt-master
  2. Check that the salt-master service is enabled and started, and enable and start it if needed:

    root@master # systemctl enable salt-master.service
    root@master # systemctl start salt-master.service
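    To inspect the current state before enabling or starting the service, you can query systemd directly; the same check applies to the salt-minion service in later steps:

    root@master # systemctl is-enabled salt-master.service
    root@master # systemctl is-active salt-master.service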
  3. If you intend to use the firewall, verify that the Salt Master node has ports 4505 and 4506 open to all Salt Minion nodes. If the ports are closed, you can open them with the yast2 firewall command by allowing the salt-master service for the appropriate zone (for example, public).
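
    Alternatively, on systems using firewalld you can open the ports from the command line; a minimal sketch, assuming public is the active zone:

    root@master # firewall-cmd --permanent --zone=public --add-port=4505-4506/tcp
    root@master # firewall-cmd --reload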

  4. Install the salt-minion package on all minion nodes:

    root@minion > zypper in salt-minion
  5. Edit /etc/salt/minion and uncomment the following line:

    #log_level_logfile: warning

    Change the warning log level to info.
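
    The line should then read:

    log_level_logfile: info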

    Note: log_level_logfile and log_level

    While log_level controls which log messages will be displayed on the screen, log_level_logfile controls which log messages will be written to /var/log/salt/minion.

    Note

    Ensure you change the log level on all cluster (minion) nodes.

  6. Make sure that the fully qualified domain name of each node can be resolved to an IP address on the public cluster network by all the other nodes.
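
    You can check the resolution from any node with getent; the host name below is illustrative:

    root@minion > getent hosts node1.example.com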

  7. Configure all minions to connect to the master. If your Salt Master is not reachable by the host name salt, edit the file /etc/salt/minion or create a new file /etc/salt/minion.d/master.conf with the following content:

    master: host_name_of_salt_master

    If you changed any of the configuration files mentioned above, restart the Salt service on all related Salt Minions:

    root@minion > systemctl restart salt-minion.service
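
    To confirm which Salt Master a minion will use, query its local configuration; the host name in the output is an example:

    root@minion > salt-call --local config.get master
    local:
        admin.example.com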
  8. Check that the salt-minion service is enabled and started on all nodes. Enable and start it if needed:

    # systemctl enable salt-minion.service
    # systemctl start salt-minion.service
  9. Verify each Salt Minion's fingerprint, and accept all Salt keys on the Salt Master if the fingerprints match.

    Note

    If the Salt Minion fingerprint comes back empty, make sure the Salt Minion has a Salt Master configuration and that it can communicate with the Salt Master.

    View each minion's fingerprint:

    root@minion > salt-call --local key.finger
    local:
    3f:a3:2f:3f:b4:d3:d9:24:49:ca:6b:2c:e1:6c:3f:c3:83:37:f0:aa:87:42:e8:ff...

    After gathering fingerprints of all the Salt Minions, list fingerprints of all unaccepted minion keys on the Salt Master:

    root@master # salt-key -F
    [...]
    Unaccepted Keys:
    minion1:
    3f:a3:2f:3f:b4:d3:d9:24:49:ca:6b:2c:e1:6c:3f:c3:83:37:f0:aa:87:42:e8:ff...

    If the minions' fingerprints match, accept them:

    root@master # salt-key --accept-all
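
    To accept keys individually instead, name each minion explicitly (minion1 is illustrative):

    root@master # salt-key -a minion1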
  10. Verify that the keys have been accepted:

    root@master # salt-key --list-all
  11. Test whether all Salt Minions respond:

    root@master # salt-run manage.status
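
    On a healthy cluster, every minion is listed under up and none under down. The output looks similar to the following; minion names are illustrative:

    down:
    up:
        - minion1
        - minion2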