SUSE Manager provides several methods for communication between client and server. All commands the SUSE Manager server sends to its clients are routed through one of them. Which method you select depends on your network infrastructure. The following sections provide a starting point for selecting the method that best suits your network environment.
This chapter is only relevant for traditional clients, as Salt clients (minions) use a Salt-specific contact method. For general information about Salt clients, see Book “Getting Started”, Chapter 6 “Getting Started with Salt”, Section 6.1 “Introduction”.
The SUSE Manager daemon (rhnsd
) runs on traditional client systems and periodically connects with SUSE Manager to check for new updates and notifications.
The daemon is started by /etc/init.d/rhnsd
.
It is still in use only on SUSE Linux Enterprise 11 and Red Hat Enterprise Linux Server 6, as these systems are not based on systemd.
On later systems, a systemd timer (rhnsd.timer
) is used and controlled by rhnsd.service
.
By default, rhnsd will check every 4 hours for new actions; therefore, it may take some time for your clients to begin updating after actions have been scheduled for them.
To check for updates, rhnsd
runs the external mgr_check
program located in /usr/sbin/
.
This is a small application that establishes the network connection to SUSE Manager.
The SUSE Manager daemon does not listen on any network ports or talk to the network directly.
All network activity is done via the mgr_check
utility.
When new packages or updates are installed on the client using SUSE Manager, any end user license agreements (EULAs) are automatically accepted. To review a package EULA, open the package detail page in the Web UI.
This figure provides an overview of the default rhnsd
process path.
All items left of the Python XMLRPC server
block represent processes running on a SUSE Manager client.
The SUSE Manager daemon can be configured by editing the following file on the client:
/etc/sysconfig/rhn/rhnsd
This is the configuration file the rhnsd initialization script uses. An important parameter for the daemon is its check-in frequency. The default interval is four hours (240 minutes). If you modify the configuration file, you must restart the daemon as root with:
/etc/init.d/rhnsd restart
The minimum allowed time interval is one hour (60 minutes). If you set the interval to less than one hour, it will revert to the default of four hours (240 minutes).
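As a minimal sketch, assuming the check-in frequency in /etc/sysconfig/rhn/rhnsd is controlled by a variable named INTERVAL (the usual name on traditional clients), the relevant line would look like this:
# Check in every 240 minutes (4 hours); values below 60 fall back to 240
INTERVAL=240
After changing the value, restart the daemon as described above.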
On systemd-based systems (for example, SLE 12 and later), the default time interval is set in /etc/systemd/system/timers.target.wants/rhnsd.timer
:
[Timer]
OnCalendar=00/4:00
RandomizedDelaySec=30min
You can create an overriding drop-in file of rhnsd.timer
with:
systemctl edit rhnsd.timer
For example, if you want to configure a two-hour time interval, enter:
[Timer]
OnCalendar=00/2:00
On write, the file will be saved as /etc/systemd/system/rhnsd.timer.d/override.conf
.
For more information about system timers, see the manpages of systemd.timer
and systemctl
.
As root, you can view the status of the rhnsd
daemon with:
/etc/init.d/rhnsd status
And the status of the rhnsd
service with:
service rhnsd status
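On systemd-based systems, the timer can be inspected with standard systemctl commands, for example:
systemctl status rhnsd.timer
systemctl list-timers rhnsd.timer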
Push via SSH is intended to be used in environments where your clients cannot reach the SUSE Manager server directly to regularly check in and, for example, fetch package updates.
In detail, this feature enables a SUSE Manager server located within an internal network to manage clients located in a “Demilitarized Zone” (DMZ) outside the firewall-protected network. For security reasons, no system in a DMZ is authorized to open a connection to the internal network and therefore to your SUSE Manager server. The solution is to configure Push via SSH, which uses an encrypted tunnel from your SUSE Manager server on the internal network to the clients located in the DMZ. After all actions/events are executed, the tunnel is closed. The server will contact the clients at regular intervals (using SSH) to check in and perform all actions and events.
Certain actions are currently not supported on clients managed via Push via SSH. This includes re-installation of systems using the provisioning module.
The following figure provides an overview of the Push via SSH process path.
All items left of the Taskomatic
block represent processes running on a SUSE Manager client.
For tunneling connections via SSH, two available port numbers are required, one for tunneling HTTP and the second for tunneling via HTTPS (HTTP is only necessary during the registration process). The port numbers used by default are 1232
and 1233
.
To overwrite these, add two custom port numbers greater than 1024 to /etc/rhn/rhn.conf
like this:
ssh_push_port_http = high port 1
ssh_push_port_https = high port 2
If you would like your clients to be contacted via their hostnames instead of an IP address, set the following option:
ssh_push_use_hostname = true
It is also possible to adjust the number of threads to use for opening client connections in parallel.
By default two parallel threads are used.
Set taskomatic.ssh_push_workers
in /etc/rhn/rhn.conf
like this:
taskomatic.ssh_push_workers = number
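Putting these options together, a Push via SSH section of /etc/rhn/rhn.conf on the server could look like the following sketch; the port numbers and worker count are example values based on the defaults mentioned above, not mandatory settings:
ssh_push_port_http = 1232
ssh_push_port_https = 1233
ssh_push_use_hostname = true
taskomatic.ssh_push_workers = 2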
For security reasons, you may want to use sudo and SSH into a system as a user other than root. The following procedure will guide you through configuring sudo for use with Push via SSH.
The packages spacewalk-taskomatic >= 2.1.165.19
and spacewalk-certs-tools >= 2.1.6.7
are required for using sudo with Push via SSH.
Set the following parameter on the server located in /etc/rhn/rhn.conf
.
ssh_push_sudo_user = user
The server will use sudo to SSH as the configured user.
You must create the user specified in Procedure: Configuring sudo on each of your clients, and the following parameters should be commented out in each client’s /etc/sudoers
file:
#Defaults targetpw   # ask for the password of the target user i.e. root
#ALL    ALL=(ALL) ALL   # WARNING! Only use this together with 'Defaults targetpw'!
Add the following lines beneath the ## User privilege specification
section of each client’s /etc/sudoers
file:
<user> ALL=(ALL) NOPASSWD:/usr/sbin/mgr_check
<user> ALL=(ALL) NOPASSWD:/home/<user>/enable.sh
<user> ALL=(ALL) NOPASSWD:/home/<user>/bootstrap.sh
On each client add the following two lines to the /home/user/.bashrc
file:
PATH=$PATH:/usr/sbin
export PATH
As your clients cannot reach the server, you will need to register your clients from the server.
A tool for performing registration of clients from the server is included with SUSE Manager and is called mgr-ssh-push-init
.
This tool expects a client’s hostname or IP address and the path to a valid bootstrap script located in the server’s filesystem for registration as parameters.
The ports for tunneling need to be specified before the first client is registered. Clients already registered before changing the port numbers must be registered again; otherwise, the server will not be able to contact them anymore.
mgr-ssh-push-init Disables rhnsd
The mgr-ssh-push-init command disables the rhnsd daemon, which normally checks for updates every 4 hours.
Because your clients cannot reach the server without using the Push via SSH contact method, the rhnsd
daemon is disabled.
To register systems that should be managed via the Push via SSH tunnel contact method, you must use an activation key that is configured to use this method; normal Push via SSH is unable to reach the server.
For managing activation keys, see Chapter 7, Activation Key Management.
Run the following command as root on the server to register a client:
# mgr-ssh-push-init --client client --register \
/srv/www/htdocs/pub/bootstrap/bootstrap_script --tunnel
To enable a client to be managed using Push via SSH (without tunneling), the same script may be used.
Registration is optional since it can also be done from within the client in this case. mgr-ssh-push-init
will also automatically generate the necessary SSH key pair if it does not yet exist on the server:
# mgr-ssh-push-init --client client --register bootstrap_script
When using the Push via SSH tunnel contact method, the client is configured to connect to SUSE Manager via the high ports mentioned above (see /etc/sysconfig/rhn/up2date
). Tools like rhn_check
and zypper
will need an active SSH session with the proper port forwarding options in order to access the SUSE Manager API.
To verify the Push via SSH tunnel connection manually, run the following command on the SUSE Manager server:
# ssh -i /root/.ssh/id_susemanager -R high port:susemanager:443 client zypper ref
The contact method to be used for managing a server can also be modified via the API.
The following example code (Python) shows how to set a system’s contact method to ssh-push
.
Valid values are:
default
(pull)
ssh-push
ssh-push-tunnel
import xmlrpclib

client = xmlrpclib.Server(SUMA_HOST + "/rpc/api", verbose=0)
key = client.auth.login(SUMA_LOGIN, SUMA_PASSWORD)
client.system.setDetails(key, 1000012345, {'contact_method': 'ssh-push'})
When a system should be migrated and managed using Push via SSH, it requires setup using the mgr-ssh-push-init
script before the server can connect via SSH.
This separate command requires human interaction to install the server’s SSH key onto the managed client (root
password). The following procedure illustrates how to migrate an already registered system:
Setup the client using the mgr-ssh-push-init
script (without --register
).
Change the client’s contact method to ssh-push
or ssh-push-tunnel
respectively (via API or Web UI).
Existing activation keys can also be edited via API to use the Push via SSH contact method for clients registered with these keys:
client.activationkey.setDetails(key, '1-mykey', {'contact_method' : 'ssh-push'})
It is possible to use Push via SSH to manage systems that are connected to the SUSE Manager server via a proxy.
To register a system, run mgr-ssh-push-init
on the proxy system for each client you wish to register.
Update your proxy with the latest packages to ensure the registration tool is available.
It is necessary to copy the SSH key to your proxy.
This can be achieved by executing the following command from the server:
# mgr-ssh-push-init --client proxy
Push via Salt SSH is intended to be used in environments where your Salt clients cannot reach the SUSE Manager server directly to regularly check in and, for example, fetch package updates.
This feature is not related to Push via SSH for the traditional clients. For Push via SSH, see Section 8.3, “Push via SSH”.
Salt provides “Salt SSH”
(salt-ssh
), a feature to manage clients from a server.
It works without installing Salt-related software on clients.
Using Salt SSH, there is no need to have minions connected to the Salt master.
Used as a SUSE Manager contact method, this feature provides functionality for Salt clients similar to what the traditional Push via SSH feature provides for traditional clients.
This feature allows:
Managing Salt entitled systems with the Push via SSH contact method using Salt SSH.
Bootstrapping such systems.
SSH daemon must be running on the remote system and reachable by the salt-api
daemon (typically running on the SUSE Manager server).
Python must be available on the remote system (a version supported by the installed Salt; currently Python 2.6).
Red Hat Enterprise Linux and CentOS versions <= 5 are not supported because they do not have Python 2.6 by default.
To bootstrap a Salt SSH system, proceed as follows:
Open the Systems › Bootstrapping page.
Fill out the required fields. Select an activation key with the Push via SSH contact method configured. For more information about activation keys, see Book “Reference Manual”, Chapter 7 “Systems”, Section 7.9 “Systems > Activation Keys”.
Check the Manage system completely via SSH option.
Confirm by clicking the Bootstrap button.
Now the system will be bootstrapped and registered in SUSE Manager. If done successfully, it will appear in the Systems list.
There are two kinds of parameters for Push via Salt SSH:
Bootstrap-time parameters - configured in the Bootstrapping page:
Host
Activation key
Password - used only for bootstrapping, not saved anywhere; all future SSH sessions are authorized via a key/certificate pair
Persistent parameters - configured SUSE Manager-wide:
sudo user - same as in Section 8.3.2, “Using sudo with Push via SSH”.
The Push via Salt SSH feature uses a taskomatic job to execute scheduled actions using salt-ssh
.
The taskomatic job periodically checks for scheduled actions and executes them.
While on traditional clients with SSH push configured only rhn_check
is executed via SSH, the Salt SSH push job executes a complete salt-ssh
call based on the scheduled action.
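As a rough illustration only, a manual salt-ssh call against a client defined in the Salt roster looks like the sketch below; the roster entry and client name are hypothetical examples, not something SUSE Manager creates for you:
# /etc/salt/roster (hypothetical entry)
client1:
  host: client1.example.com
  user: root
# run a test call from the server
salt-ssh 'client1' test.ping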
OpenSCAP auditing is not available on Salt SSH minions.
Beacons do not work with Salt SSH.
Installing a package on a system using zypper
will not invoke the package refresh.
Virtual Host functions (for example, a host-to-guests relationship) will not work if the virtual host system is Salt SSH-based.
For more information, see the upstream Salt SSH documentation.
OSAD is an alternative contact method between SUSE Manager and its clients.
By default, SUSE Manager uses rhnsd
, which contacts the server every four hours to execute scheduled actions.
OSAD allows registered client systems to execute scheduled actions immediately.
Keep rhnsd Running
Use OSAD only in addition to rhnsd.
If you disable rhnsd
your client will be shown as not checking in after 24 hours.
OSAD has several distinct components:
The osa-dispatcher
service runs on the server, and uses database checks to determine if clients need to be pinged, or if actions need to be executed.
The osad
service runs on the client. It responds to pings from osa-dispatcher
and runs mgr_check
to execute actions when directed to do so.
The jabberd
service is a daemon that uses the XMPP
protocol for communication between the client and the server.
The jabberd
service also handles authentication.
The mgr_check
tool runs on the client to execute actions.
It is triggered by communication from the osa-dispatcher
service.
The osa-dispatcher
periodically runs a query to check when clients last showed network activity.
If it finds a client that has not shown activity recently, it will use jabberd
to ping all osad
instances running on all clients registered with your SUSE Manager server.
The osad
instances respond to the ping using jabberd
, which is running in the background on the server.
When the osa-dispatcher
receives the response, it marks the client as online.
If the osa-dispatcher
fails to receive a response within a certain period of time, it marks the client as offline.
When you schedule actions on an OSAD-enabled system, the task will be carried out immediately.
The osa-dispatcher
periodically checks clients for actions that need to be executed.
If an outstanding action is found, it uses jabberd
to execute mgr_check
on the client, which will then execute the action.
This section covers enabling the osa-dispatcher
and osad
services, and performing initial setup.
OSAD clients use the fully qualified domain name (FQDN) of the server to communicate with the osa-dispatcher
service.
SSL is required for osad
communication.
If SSL certificates are not available, the daemon on your client systems will fail to connect.
Make sure your firewall rules are set to allow the required ports.
For more information, see Table 1.1, “Required Server Ports”.
On your SUSE Manager server, as the root user, start the osa-dispatcher
service:
systemctl start osa-dispatcher
On each client machine, install the osad
package from the Tools
child channel.
The osad
package should be installed on clients only.
If you install the osad
package on your SUSE Manager Server, it will conflict with the osa-dispatcher
package.
On the client systems, as the root user, start the osad
service:
systemctl start osad
Because osad
and osa-dispatcher
are run as services, you can use standard commands to manage them, including stop
, restart
, and status
.
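For example, to check or restart the services with standard systemctl commands, run the following on the client (osad) or the server (osa-dispatcher):
systemctl status osad
systemctl restart osad
systemctl status osa-dispatcher
systemctl restart osa-dispatcher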
Configuration and Log Files. Each OSAD component is configured by local configuration files. We recommend you keep the default configuration parameters for all OSAD components.
Component | Location | Path to Configuration File
---|---|---
osa-dispatcher | Server | /etc/rhn/rhn.conf
osad | Client | /etc/sysconfig/rhn/osad.conf
osad log file | Client | /var/log/osad
jabberd log file | Both | /var/log/messages
Troubleshooting OSAD. If your OSAD clients cannot connect to the server, or if the jabberd
service takes a lot of time responding to port 5222, it could be because you have exceeded the open file count.
Every client needs one always-open TCP connection to the server, which consumes a single file handler.
If the number of file handlers currently open exceeds the maximum number of files that jabberd
is allowed to use, jabberd
will queue the requests, and refuse connections.
To resolve this issue, you can increase the file limits for jabberd
by editing the /etc/security/limits.conf
configuration file and adding these lines:
jabber soft nofile 5100
jabber hard nofile 6000
Calculate the limits required for your environment by adding 100 to the number of clients for the soft limit, and 1000 to the current number of clients for the hard limit. In the example above, we have assumed 5000 current clients, so the soft limit is 5100, and the hard limit is 6000.
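For example, for an environment with 2000 clients, applying the same rule gives a soft limit of 2100 and a hard limit of 3000:
jabber soft nofile 2100
jabber hard nofile 3000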
You will also need to update the max_fds
parameter in the /etc/jabberd/c2s.xml
file with your chosen hard limit:
<max_fds>6000</max_fds>