Salt applies states by matching minions with the relevant state data. This data comes from SUSE Manager in the form of package and custom states, and it targets minions at three levels of hierarchy. The state hierarchy is defined by the following order of priority: individual minions have priority on packages and custom states over groups; in turn, a group has priority over the organization.
Minion Level
   ↓
Group Level
   ↓
Organization Level
For example:
Org1 requires that vim version 1 is installed
Group1 requires that vim version 2 is installed
Group2 requires any version installed
This would lead to the following order of hierarchy:
Minion1, part of [Org1, Group1], wants vim removed: vim is removed (Minion Level)
Minion2, part of [Org1, Group1], wants vim version 2: gets version 2 (Group Level)
Minion3, part of [Org1, Group1], wants any version: gets version 2 (Group Level)
Minion4, part of [Org1, Group2], wants any version: gets vim version 1 (Org Level)
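As a minimal sketch of what such a package state looks like in Salt (the package name and version pin below are illustrative, not taken from SUSE Manager's generated states):

vim:                      # state ID: the package to manage
  pkg.installed:
    - version: 2          # pin the required version

A state of this shape at the minion level overrides the equivalent group- or organization-level entry for the same package.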
The SUSE Manager salt-master reads its state data from three file root locations.
The directory /usr/share/susemanager/salt is used by SUSE Manager and comes from the susemanager-sls package. It is shipped and updated together with SUSE Manager and includes certificate setup and common state logic to be applied to packages and channels.
The directory /srv/susemanager/salt is generated by SUSE Manager, based on the channels and packages assigned to minions, groups, and organizations. Its contents are overwritten and regenerated as needed; it can be thought of as the SUSE Manager database translated into Salt directives.
The third directory, /srv/salt, is for custom state data, modules, and so on. SUSE Manager does not operate within or otherwise use this directory. However, the state data placed here affects the highstate of minions and is merged with the total state result generated by SUSE Manager.
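Conceptually, the salt-master merges these three locations as Salt file roots. The snippet below is an illustrative sketch of such a file_roots configuration, not the exact file shipped by SUSE Manager:

file_roots:
  base:
    - /usr/share/susemanager/salt
    - /srv/susemanager/salt
    - /srv/salt

When the highstate is compiled, states from all three roots are merged into a single state tree.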
All sls files created by users are saved to disk on the salt-master server. These files are placed in /srv/susemanager/salt/, with each organization in its own directory. Although these states are custom, they are created using SUSE Manager. The following provides an overview of the directory structure:
├── manager_org_DEVEL
│   ├── files
│   │   ... files needed by states (uploaded by users) ...
│   └── state.sls
│       ... other sls files (created by users) ...

E.g.:

├── manager_org_TESTING
│   ├── files
│   │   └── motd          # user created
│   │   ... other files needed by states ...
│   └── motd.sls          # user created
│       ... other sls files ...
SUSE Manager exposes a small amount of internal data as Pillars, which can be used in custom SLS files. The exposed data includes group membership, organization membership, and file roots. These are managed either automatically by SUSE Manager, or manually by the user.
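To inspect the Pillar data a particular minion receives, you can use the standard pillar.items execution module (the minion ID below is a placeholder):

salt '<minion-id>' pillar.items

Among other entries, this shows the group_ids key that is used for targeting in the bulk actions later in this section.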
To avoid hard-coding organization IDs within SLS files, a Pillar entry is added for each organization:
org-files-dir: relative_path_to_files
The files in the specified directory are available to all minions that belong to the organization. The following example uses this Pillar entry in a state that manages the file /etc/motd:
/etc/motd:
  file.managed:
    - source: salt://{{ pillar['org-files-dir'] }}/motd
    - user: root
    - group: root
    - mode: 644
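Assuming this state is saved as motd.sls in the organization's directory (for example, manager_org_TESTING from the directory overview above), it could be applied to a single minion with:

salt '<minion-id>' state.apply manager_org_TESTING.motd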
Pillar data can be used to perform bulk actions, such as applying all assigned states to the minions within a group. This section contains some examples of bulk actions that you can take using group states. To perform these actions, you first need to determine the ID of the group that you want to manipulate. You can determine the Group ID by using the spacecmd command:
spacecmd group_details
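For example, for a hypothetical group named mygroup (the group name is a placeholder):

spacecmd group_details mygroup

The group's ID is listed among the details in the output.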
In these examples, we use an example Group ID of GID.
To apply all states assigned to the group:
salt -I 'group_ids:GID' state.apply custom.group_GID
To apply any state (whether or not it is assigned to the group):
salt -I 'group_ids:GID' state.apply <state>
To apply a custom state:
salt -I 'group_ids:GID' state.apply manager_org_1.<customstate>
To apply the highstate to all minions in the group:
salt -I 'group_ids:GID' state.apply