In this manual, unless otherwise specified, SUSE Manager version 3.2 is assumed, and this version is required whenever a feature is discussed. SUSE Manager 3.2 and SUSE Manager 3.2 Proxy were originally released as a SLES 12 SP3 extension. With the next maintenance update (December 2018), SUSE Manager 3.2 and SUSE Manager 3.2 Proxy will be based on SLES 12 SP4 and officially support SLE 12 SP4 clients. In the following sections and chapters, it is highly recommended to use SLE 12 SP4 instead of SP3. Whenever features of the SUSE Manager 3.2 host operating system are documented and no version is specified, version 12 SP4 is assumed.
This best practice guide is intended for z/VM administrators responsible for operating the IBM z Systems mainframe. The goal of this guide is to lead a z/VM administrator trained on normal z Systems operating protocols through the installation of SUSE Manager onto an existing mainframe system. The intent of this article is not to cover the variety of hardware configuration profiles available on z Systems, but instead to provide a foundational overview of the procedure and requirements necessary for a successful SUSE Manager server deployment.
The z/VM administrator should acquire and prepare the following resources for a successful SUSE Manager installation. SUSE Manager 3.2 is delivered as an extension. These sections provide the minimum and recommended system requirements for SUSE Manager. The base system for SUSE Manager 3.2 is SLES 12 SP4.
Hardware | Recommended Hardware |
---|---|
IBM Systems | IBM zEnterprise System z196 (z196); IBM zEnterprise System z114 (z114); IBM zEnterprise EC12 (zEC12); IBM zEnterprise BC12 (zBC12); IBM z13 (z13); LinuxONE Rockhopper; LinuxONE Emperor |
RAM | Split memory requirements across available RAM, VDISK and swap to suit your environment. On a production system the ratio of physical memory to VDISK will need to be re-evaluated based on the number of clients to be supported. Minimum 5 GB for a test server (3 GB RAM + 2 GB VDISK swap); minimum 16 GB for a base installation; minimum 32 GB for a production server |
Free Disk Space | Minimum 100 GB for the root partition; minimum 50 GB for /var/lib/pgsql; minimum 50 GB per SUSE product plus 100 GB per Red Hat product |
Network Connection | OSA Express Ethernet (including Fast and Gigabit Ethernet); HiperSockets or Guest LAN; 10 GbE, VSWITCH; RoCE (RDMA over Converged Ethernet). The following interfaces are still included but no longer supported: CTC (or virtual CTC); IP network interface for IUCV |
SUSE Linux Enterprise 12 SP4 Installation Media for IBM z Systems:
There are a few additional resource requirements you will need to prepare before installing the SUSE Manager extension on your system. This section covers these requirements.
The z/VM guest should be provided with a static IP address and hostname, as these cannot be easily changed after initial setup. The hostname must contain fewer than 8 characters.
For more information on SUSE Manager additional requirements, see https://www.suse.com/documentation/suse-manager-3/book_suma_best_practices/data/mgr_conceptual_overview.html.
You will need to ensure you have sufficient disk storage for SUSE Manager before running yast2 susemanagersetup. This section explains these requirements in more detail.
By default the file system of SUSE Manager, including the embedded database and patch directories, resides within the root volume. While adjustments are possible after installation is complete, it is the administrator’s responsibility to specify and monitor these adjustments.
If your SUSE Manager runs out of disk space, this can have a severe impact on its database and file structure. Preparing storage in accordance with this section will help prevent these harmful effects. SUSE technical services cannot provide support for systems suffering from low disk space conditions, as this can affect an entire system and therefore become unresolvable. A full recovery is only possible with a previous backup or a new SUSE Manager installation.
Required Storage Devices. An additional disk is required for database storage. This should be a zFCP or DASD device, as these are preferred for use with HyperPAV. The disk should fulfill the following requirements:
At least 50 GB for /var/lib/pgsql
At least 50 GB for each SUSE product in /var/spacewalk
At least 100 GB for each Red Hat product in /var/spacewalk
Reclaiming Disk Space. If you need to reclaim more disk space, try these suggestions:
Remove custom channels (you cannot remove official SUSE channels)
Use the spacewalk-data-fsck command to compare the Spacewalk database with the file system and remove entries that are missing from either (see spacewalk-data-fsck --help for available options), as shown below.
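For example, a minimal sketch, run as root on the SUSE Manager server; review the reported mismatches and consult the help output before removing anything:

spacewalk-data-fsck
spacewalk-data-fsck --help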
This section covers the installation of SUSE Manager 3.2 as an extension to SLES 12 SP4.
For more information on deploying SLES 12 SP4 on your hardware, see https://www.suse.com/documentation/sles-12/book_sle_deployment/data/cha_zseries.html.
During installation of SLES 12 SP4, select SUSE Manager as an extension.
After rebooting, set up the additional storage required for /var/spacewalk and /var/lib/pgsql, and swap space, using the YaST partitioner tool. This step is required before running yast2 susemanagersetup.
After configuring the storage requirements, having executed a YaST online update and completed a full system reboot, run SUSE Manager setup to finalize the SUSE Manager installation on your z Systems mainframe:
# yast2 susemanagersetup
This completes the installation of SUSE Manager on your z Systems mainframe. For more information on beginning management with SUSE Manager, see Setup SUSE Manager with YaST.
This chapter explains how to install and set up SUSE Manager 3.2 Proxy. It also provides notes about migrating a previous proxy to version 3.2.
SUSE Manager 3.2 Proxy is a SUSE Manager add-on that caches software packages on an internal, central server. The proxy caches patch updates from SUSE or custom RPMs generated by third-party organizations. A proxy allows you to use bandwidth more effectively because client systems connect to the proxy for updates, and the SUSE Manager server is no longer required to handle all client requests. A SUSE Manager Proxy can serve both Traditional and Salt clients. The proxy also supports transparent custom package deployment.
SUSE Manager Proxy is an open source (GPLv2) solution that provides the following features:
Cache software packages within a Squid proxy.
Client systems see the SUSE Manager Proxy as a SUSE Manager server instance.
The SUSE Manager Proxy is registered as a client system with the SUSE Manager server.
The primary goal of a SUSE Manager Proxy is to improve SUSE Manager performance by reducing bandwidth requirements and accelerating response time.
The following section provides SUSE Manager Proxy requirements.
Supported Client Systems. For supported clients and their requirements, see Book “Getting Started”, Chapter 1 “Introduction”, Section 1.2 “Prerequisites for Installation”, Section 1.2.5 “Supported Client Systems”.
Hardware Requirements. Hardware requirements highly depend on your usage scenario. When planning proxy environments, consider the amount of data you want to cache on your proxy. If your proxy is to be a 1:1 mirror of your SUSE Manager server, the same amount of disk space is required. For specific hardware requirements, see the following table.
Hardware | Required |
---|---|
CPU | Multi-core 64-bit CPU (x86_64) |
RAM | Minimum 4 GB for a non-production server; minimum 16 GB for a production server |
Free Disk Space | Minimum 100 GB for the base installation, plus at least 50 GB of cache per SUSE product and 100 GB per Red Hat product; a resizable partition is strongly recommended |
SUSE recommends storing the squid proxy caching data on a separate disk formatted with the XFS file system.
SSL Certificate Password. For installing the proxy, you need the SSL certificate password entered during the initial installation of SUSE Manager.
Network Requirements. For additional network requirements, see Book “Getting Started”, Chapter 1 “Introduction”, Section 1.2 “Prerequisites for Installation”, Section 1.2.4 “Network Requirements”.
SUSE Customer Center. For using SUSE Manager Proxy, you need an account at SUSE Customer Center (SCC) where your purchased products and product subscriptions are registered. Make sure you have the following subscriptions:
One or more subscriptions for SUSE Manager Proxy.
One or more subscriptions for SUSE Manager.
Subscriptions for the products on the client systems you want to register with SUSE Manager via SUSE Manager Proxy.
Subscriptions to client entitlements for the client systems you want to register with SUSE Manager via SUSE Manager Proxy.
Network Time Protocol (NTP). The connection to the Web server via Secure Sockets Layer (SSL) requires correct time settings on the server, proxy and clients. For this reason, all systems must use NTP. For more information, see https://www.suse.com/documentation/sles-12/book_sle_admin/data/cha_netz_xntp.html.
Virtual Environments. The following virtual environments are supported:
For running SUSE Manager Proxy in virtual environments, use the following settings for the virtual machine (VM):
At least 1 GB of RAM
Bridged network
The following section will guide you through the installation and setup procedure.
SUSE Manager Proxy systems are registered as traditional clients or as Salt clients using a bootstrap script. Migrating a traditionally registered Proxy system to a Salt Proxy system is not possible. Re-install the Proxy if you want to switch to Salt.
Before you can select the correct child channels while creating the activation key, ensure you have completely downloaded the channels for SUSE Linux Enterprise 12 SP4.
Create an activation key based on the SUSE Linux Enterprise 12 SP4 base channel. For more information about activation keys, see Book “Getting Started”, Chapter 5 “Registering Clients”, Section 5.2 “Creating Activation Keys”.
From the Child Channels listing, select the SUSE Manager 3.2 Proxy child channel with the matching update channel (SUSE-Manager-Proxy-3.2-Pool and SUSE-Manager-Proxy-3.2-Updates).
These child channels are required for providing the proxy packages and updates.
For normal SLES clients, SLES12-SP4-Updates plus SLE-Manager-Tools12-Pool and SLE-Manager-Tools12-Updates are mandatory.
When bootstrapping a traditional client (Management), uncheck Bootstrap using Salt.
Using Salt is supported since version 3.2.
For more information about bootstrap scripts, see Book “Getting Started”, Chapter 5 “Registering Clients”, Section 5.4 “Registering Traditional Clients”, Section 5.4.2 “Editing the Bootstrap Script”.
Create the SUSE Manager Tools Repository for bootstrapping, see Book “Getting Started”, Chapter 5 “Registering Clients”, Section 5.3 “Creating the SUSE Manager Tools Repository”.
Bootstrap the client with the bootstrap script. For more information, see Book “Getting Started”, Chapter 5 “Registering Clients”, Section 5.4 “Registering Traditional Clients”, Section 5.4.3 “Connecting Clients”.
In the case of a Salt client, accept the key on the › › page by clicking the check mark; it will then appear in the › › . Check that SUSE-Manager-Proxy-3.2-Pool and SUSE-Manager-Proxy-3.2-Updates are selected.
A few more steps are still needed:
Install the patterns-suma_proxy pattern (see Section 2.2.3, “Install the suma_proxy pattern”)
Copy the SSL certificate and key from the server (see Section 2.2.4, “Copy Server Certificate and Key”)
Run configure-proxy.sh (see Section 2.2.5, “Running configure-proxy.sh”)
You will then be able to register your clients against the proxy using the Web UI or a bootstrap script as if it were a SUSE Manager server. For more information, see Section 2.2.6, “Registering Salt Clients via SUSE Manager Proxy”.
Install the suma_proxy pattern. On the server, select the patterns-suma_proxy package for installation on the proxy, or make sure the suma_proxy pattern is installed by running the following command on the proxy as root:
zypper in -t pattern suma_proxy
The new salt-broker service will be automatically started at the end of the package installation. This service forwards the Salt interactions to the SUSE Manager server.
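To verify the service came up, a quick check (assuming systemd) is:

systemctl status salt-broker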
It is possible to arrange Salt proxies in a chain. In such a case, the upstream proxy is named “parent”.
Make sure the proxy’s TCP ports 4505 and 4506 are open and that the proxy can reach the SUSE Manager server (or another upstream proxy) on these ports.
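A simple reachability check from the proxy, assuming netcat is installed and suma-server is an example name for your upstream host, might look like:

nc -zv suma-server 4505
nc -zv suma-server 4506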
The proxy will share some SSL information with the SUSE Manager server, so the next step is to copy the certificate and its key from the SUSE Manager server or the upstream proxy.
As root, enter the following commands on the proxy, using the name of your SUSE Manager server or chained proxy for PARENT:
mkdir /root/ssl-build
cd /root/ssl-build
scp root@PARENT:/root/ssl-build/RHN-ORG-PRIVATE-SSL-KEY .
scp root@PARENT:/root/ssl-build/RHN-ORG-TRUSTED-SSL-CERT .
scp root@PARENT:/root/ssl-build/rhn-ca-openssl.cnf .
The SUSE Manager Proxy functionality is only supported if the SSL certificate was signed by the same CA as the SUSE Manager Server certificate. Using certificates signed by different CAs for Proxies and Server is not supported.
Running configure-proxy.sh. The configure-proxy.sh script finalizes the setup of your SUSE Manager Proxy. Execute the interactive configure-proxy.sh script. Pressing Enter without further input makes the script use the default values provided between brackets [].
Here is some information about the requested settings:
A SUSE Manager parent can be either another proxy server or a SUSE Manager server.
An HTTP proxy enables your SUSE Manager proxy to access the Web. This is needed if direct access to the Web is prohibited by a firewall.
Normally, the correct value (3.0, 3.1, or 3.2) should be offered as a default.
An email address to which problems are reported.
For safety reasons, press Y.
Answer N. This ensures that the new certificates that were previously copied from the SUSE Manager server are used.
The next questions are about the characteristics to use for the SSL certificate of the proxy. The organization might be the same organization that was used on the server, unless your proxy is in a different organization than your main server.
The default value here is the proxy’s hostname.
The default value here is also the proxy’s hostname.
Further information attached to the proxy’s certificate. Note that the country code must consist of two uppercase letters; for further information on country codes, refer to the online list of alpha-2 codes. As the country code, enter the code set during the SUSE Manager installation. For example, if your proxy is in the US and your SUSE Manager server in DE, you must enter DE for the proxy.
Use this if your proxy server can be accessed through various DNS CNAME aliases. Otherwise it can be left empty.
Enter the password that was used for the certificate of your SUSE Manager server.
Use this option if you want to reuse an SSH key that was used for SSH-Push Salt minions on the server.
Accept the default Y.
Use the same user name and password as on the SUSE Manager server.
SLP stands for Service Location Protocol.
If parts are missing, such as the CA key and public certificate, the script prints the commands you must execute to integrate the needed files. When the mandatory files have been copied, re-run configure-proxy.sh. Also restart the script if an HTTP error occurred during execution.
configure-proxy.sh activates services required by SUSE Manager Proxy, such as squid, apache2, salt-broker, and jabberd.
To check the status of the proxy system and its clients, open the proxy system’s details page in the Web UI. The Connection and Proxy subtabs display the respective status information.
Proxy servers may now act as a broker and package cache for Salt minions. These minions can be registered with a bootstrap script like the traditional clients, or from the Web UI, or the command line.
Registering Salt clients via SUSE Manager Proxy from the Web UI is done almost the same way as registering clients directly with the SUSE Manager server.
The difference is that you specify the name of the proxy in the Proxy drop-down box on the › › page.
Instead of the Web UI, you may use the command line to register a minion through a proxy. To do so, add the proxy FQDN as the master in the minion configuration file located at:
/etc/salt/minion
or alternatively:
/etc/salt/minion.d/NAME.conf
Add the FQDN to the minion file:
master: proxy123.example.com
Save and restart the salt-minion service with:
systemctl restart salt-minion
On the proxy, accept the new minion key with:
salt-key -a 'minion'
The minion will now connect to the proxy exclusively for Salt operations and normal HTTP package downloads.
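To confirm that the registration worked end to end, a quick check from the SUSE Manager server (using the example minion id from above) is:

salt 'minion' test.ping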
Registering clients (either traditional or Salt) via SUSE Manager Proxy with a script is done almost the same way as registering clients directly with the SUSE Manager server. The difference is that you create the bootstrap script on the SUSE Manager Proxy with a command-line tool. The bootstrap script then deploys all necessary information to the clients. The bootstrap script references some parameters (such as activation keys or GPG keys) that depend on your specific setup.
Create a client activation key on the SUSE Manager server using the Web UI. See Book “Getting Started”, Chapter 5 “Registering Clients”, Section 5.2 “Creating Activation Keys”.
On the proxy, execute the mgr-bootstrap command-line tool as root.
If needed, use the additional command-line switches to tune your bootstrap script. An important option is --traditional, which lets you opt for a traditional client instead of a Salt minion.
To view available options, type mgr-bootstrap --help on the command line:
# mgr-bootstrap --activation-keys=key-string
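For example, combining the options shown above into one call (the activation key name is hypothetical):

# mgr-bootstrap --activation-keys=1-proxy-key --traditional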
Optionally edit the resulting bootstrap script. Execute the bootstrap script on the clients as described in Book “Getting Started”, Chapter 5 “Registering Clients”, Section 5.4 “Registering Traditional Clients”, Section 5.4.3 “Connecting Clients”.
The clients are registered with the SUSE Manager Proxy specified in the bootstrap script.
Within the Web UI, the standard proxy pages show information about clients, regardless of whether they are minions or traditional clients.
A list of clients connected to a proxy can be found under › › . A list of chained proxies for a minion can be found under › › . If you decide to move any of your clients between proxies or the server, you will need to repeat the registration process from scratch.
To enable PXE boot via a proxy server, additional software must be installed and configured on both the SUSE Manager server and the SUSE Manager Proxy server.
On the SUSE Manager server install susemanager-tftpsync:
zypper in susemanager-tftpsync
On the SUSE Manager Proxy server install susemanager-tftpsync-recv:
zypper in susemanager-tftpsync-recv
Run the configure-tftpsync.sh setup script and enter the requested information:
configure-tftpsync.sh
It asks for the hostname and IP address of the SUSE Manager server and of the proxy itself, as well as the tftpboot directory on the proxy.
On the SUSE Manager server, run configure-tftpsync.sh to configure the upload to the SUSE Manager Proxy server:
configure-tftpsync.sh FQDN_of_Proxy_Server
To initiate an initial synchronization on the SUSE Manager Server run:
cobbler sync
This can also be done after each change within Cobbler that needs to be synchronized immediately. Otherwise, Cobbler synchronization runs automatically when needed. For more information about Cobbler, see Chapter 10, Cobbler.
SUSE Manager uses Cobbler to provide provisioning. PXE (TFTP) is installed and activated by default. To enable systems to find the PXE boot server on the SUSE Manager Proxy, add the following to the DHCP configuration for the zone containing the systems to be provisioned:
next-server: <IP_Address_of_SUSE_Manager_Proxy_Server>
filename: "pxelinux.0"
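In standard ISC dhcpd syntax, an equivalent zone entry would look like the following sketch (subnet and IP address are placeholders for your environment):

subnet 192.168.1.0 netmask 255.255.255.0 {
  # hand PXE clients to the SUSE Manager Proxy
  next-server 192.168.1.10;
  filename "pxelinux.0";
}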
The recommended order for migrations is to first migrate the server and then the proxies.
For the migration of traditionally managed proxies there are two possible approaches:
Existing SUSE Manager proxies may be upgraded to version 3.2 with YaST or zypper migration.
Alternatively, the proxies may be replaced by new ones.
This section documents both approaches.
For migrating SUSE Manager 3 Proxy and earlier, see https://www.suse.com/documentation/suse-manager-3/book_suma_advanced_topics_31/data/sect1_chapter_book_suma_advanced_topics_31.html, Chapter "SUSE Manager 3.1 Proxy".
A SUSE Manager Proxy is dumb in the sense that it does not contain any information about the clients connected to it. A SUSE Manager Proxy can therefore be replaced by a new one. Naturally, the replacement proxy must have the same name and IP address as its predecessor.
To replace a SUSE Manager Proxy and keep the clients registered to it, leave the old proxy in SUSE Manager. Create a reactivation key for this system and then register the new proxy using the reactivation key. If you do not use the reactivation key, you will need to re-register all the clients against the new proxy.
Before starting the actual migration procedure, save the data from the old proxy, if needed. Consider copying important data to a central place that can also be accessed by the new server:
Copy the scripts that are still needed.
Copy the activation keys from the previous server. Of course, it is always better to re-create the keys.
Shut down the server.
Install a new SUSE Manager 3.2 Proxy, see Section 2.2, “Proxy Installation and Connecting Clients”.
In the SUSE Manager Web UI select the newly installed SUSE Manager Proxy and delete it from the systems list.
In the Web UI, create a reactivation key for the old proxy system: on the System Details tab of the old proxy, click Reactivation, then click Generate New Key, and note it down (write it on a piece of paper or copy it to the clipboard). For more information about reactivation keys, see Book “Reference Manual”, Chapter 7 “Systems”, Section 7.3 “System Details”, Section 7.3.1 “ › ”, Section 7.3.1.4 “ › › [Management]”.
After the installation of the new proxy, perform the following actions (if needed):
Copy the centrally saved data to the new proxy system.
Install any other needed software.
If the proxy is also used for autoinstallation, do not forget to set up TFTP synchronization.
During the installation of the proxy, clients will not be able to reach the SUSE Manager server.
After a SUSE Manager Proxy system has been deleted from the systems list, all clients connected to this proxy will be (incorrectly) listed as directly connected to the SUSE Manager server.
After the first successful operation on a client such as execution of a remote command or installation of a package or patch this information will automatically be corrected.
This may take a few hours.
In most situations, upgrading the proxy is the preferred solution, as this retains all cached packages. This route saves time, especially for proxies connected to the SUSE Manager server via low-bandwidth links. The upgrade is similar to a standard client migration.
Before initializing the product migration, make sure the migration target channels are completely mirrored. To upgrade to SUSE Manager 3.2 Proxy, you require at least the SUSE Linux Enterprise Server 12 SP4 base channel with the SUSE Manager Proxy 3.2 child channel for your architecture.
Direct your browser to the SUSE Manager Web UI where your proxy is registered, and log in.
On the › › › page, select your proxy server from the table. On the system’s detail page, select the › tab. This page lists the products installed on your proxy client and the available target products.
Select the required Target Products. In this case, you require SUSE Linux Enterprise Server 12 SP4 with SUSE Manager Proxy 3.2.
Confirm with . From the Schedule Migration menu, select the time and click .
When the migration is done, check the System Status on the › .
Finally consider scheduling a reboot.
In highly secure network configurations, you may wish to ensure your minions are connecting to a specific master. To set up validation from minion to master, enter the master’s fingerprint in the /etc/salt/minion configuration file.
See the following procedure:
On the master enter the following command as root and note the fingerprint:
salt-key -F master
On your minion, open the minion configuration file located at /etc/salt/minion. Uncomment the following line and enter the master’s fingerprint, replacing the example fingerprint:
master_finger: 'ba:30:65:2a:d6:9e:20:4f:d8:b2:f3:a7:d4:65:11:13'
Restart the salt-minion service:
# systemctl restart salt-minion
For more information on configuring security from a minion, see https://docs.saltstack.com/en/latest/ref/configuration/minion.html.
To sign repository metadata, a custom GPG key is required. Create a new GPG key as root via the following steps:
$> gpg --gen-key
Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all
Is this correct? (y/N) y

GnuPG needs to construct a user ID to identify your key.

Real name: company sign key
Email address: company@example.com
Comment:
You selected this USER-ID:
    "company sign key <company@example.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

gpg: key 607FABDB marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
pub   2048R/607FABDB 2018-07-09
      Key fingerprint = ACF5 4698 EC70 FD6A F8C9  942B A7A1 9301 607F ABDB
uid       [ultimate] company sign key <company@example.com>
sub   2048R/A812FA62 2018-07-09
Two configuration files need to be changed to enable signing of metadata:
/etc/rhn/signing.conf to specify KEYID and PASSPHRASE
/etc/rhn/rhn.conf to enable signing metadata
Example for /etc/rhn/signing.conf
:
KEYID="607FABDB" GPGPASS="MySecretPassword"
To enable signing of metadata, add the following in /etc/rhn/rhn.conf:
:
sign_metadata = 1
All Spacewalk services must be restarted after modifying rhn.conf.
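This can be done with the spacewalk-service helper:

spacewalk-service restart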
After enabling metadata signing, all metadata needs to be re-generated. This can be done with a small SQL script:
$> spacewalk-sql -i
psql (9.6.9)
Type "help" for help.

susemanager=# insert into rhnRepoRegenQueue (id, CHANNEL_LABEL, REASON, FORCE)
susemanager-# (select sequence_nextval('rhn_repo_regen_queue_id_seq'), C.label, 'changed signing of metadata', 'Y' from rhnChannel C);
INSERT 0 40
susemanager=# \q
$>
When this feature is enabled, all clients have to trust the new GPG key. If the new GPG key is not imported on the client, installation or updating of packages is not possible.
Export the GPG key and make it available in the pub/ directory.
# gpg --batch --export -a -o <output filename> <KEYID>
$> gpg --batch --export -a -o /srv/www/htdocs/pub/company.key 607FABDB
Import the GPG key on all clients. This can be done using a remote command on all clients:
# rpm --import http://<server.domain.top>/pub/keyname.key
$> rpm --import http://suma-refhead-srv.mgr.suse.de/pub/company.key
For Salt managed systems it might make sense to use a state to trust GPG keys, as sketched below.
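A minimal sketch of such a state, assuming the key was published as company.key in the pub/ directory as above (the URL and key ID are examples):

import_company_gpg_key:
  cmd.run:
    # import the repository signing key into the RPM keyring
    - name: rpm --import http://suma-server.example.com/pub/company.key
    # skip if a gpg-pubkey package with this key ID is already installed
    - unless: rpm -q gpg-pubkey --qf '%{VERSION}\n' | grep -qi 607fabdb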
SUSE Manager delivers best-in-class Linux server management capabilities. For detailed information about the product, please refer to the SUSE Manager documentation.
The SUSE Manager Server and SUSE Manager Proxy images published by SUSE in selected Public Cloud environments are provided as Bring Your Own Subscription (BYOS) images. SUSE Manager Server instances need to be registered with the SUSE Customer Center (SCC). Subscriptions of SUSE Manager Proxy instances are handled through their parent SUSE Manager Server. After an instance is launched, SUSE Manager needs to be set up and configured following the procedure in the SUSE Manager documentation.
Select an instance size that meets the system requirements as documented in the SUSE Manager documentation.
Minimal main memory: >12G
The SUSE Manager setup procedure performs a Forward-confirmed reverse DNS lookup. This must succeed in order for the setup procedure to complete successfully and for SUSE Manager to operate as expected. Therefore it is important that the hostname and IP configuration be performed prior to running the SUSE Manager setup procedure.
SUSE Manager Server and SUSE Manager Proxy instances are expected to run in a network configuration that provides you control over DNS entries and that is shielded from the Internet at large. Within this network configuration, DNS (Domain Name Service) resolution must be provided, such that hostname -f returns the FQDN (Fully Qualified Domain Name). DNS resolution is important not only for the SUSE Manager Server setup procedure but also when clients are configured to be managed via SUSE Manager. Configuring DNS is cloud-framework dependent; refer to the cloud service provider documentation for detailed instructions.
Minimal free disk space for SUSE Manager: 15 GB.
For Public Cloud instances we recommend that the repositories and the SUSE Manager Server database, and respectively the SUSE Manager Proxy squid cache, be located on an external virtual disk. The details for this setup are provided in Section 4.2.1, “Using Separate Storage Volume”.
Storing the database and the repositories on an external virtual disk ensures that the data is not lost should the instance need to be terminated for some reason.
Please ensure that the selected instance type matches the requirements listed above. Although we recommend storing the database and the repositories on a separate device, it is still recommended to set the root volume size of the instance to 20 GB.
Run an instance of the SUSE Manager Server or SUSE Manager Proxy image as published by SUSE. The images are identifiable by the suse, manager, server or proxy, and byos keywords in each public cloud environment. The SUSE Manager instance must run in a network access restricted environment such as the private subnet of a VPC or with an appropriate firewall setting such that it can only be accessed by machines in the IP ranges you use. A generally accessible SUSE Manager instance violates the terms of the SUSE Manager EULA. Access to the web interface of SUSE Manager requires https.
SUSE Manager requires a stable and reliable hostname; changing the hostname can create errors. In this procedure, all commands need to be executed as the root user.
Disable hostname setup by editing the DHCP configuration file at /etc/sysconfig/network/dhcp
, and adding this line:
DHCLIENT_SET_HOSTNAME="no"
Set the hostname locally using the hostnamectl command. Ensure you use the system name, not the fully qualified domain name (FQDN). The FQDN is set by the cloud framework; for example, if cloud_instance.cloud.net is the FQDN, cloud_instance is the system name and cloud.net is the domain name:
hostnamectl set-hostname system_name
Create a DNS entry in your network environment for domain name resolution, or force correct resolution by editing the /etc/hosts file:
$ echo "${local_address} suma.cloud.net suma" >> /etc/hosts
You can locate the local address by checking your public cloud web console, or from the command line using these commands:
Amazon EC2 instance:
$ ec2metadata --local-ipv4
Google Compute Engine:
$ gcemetadata --query instance --network-interfaces --ip
Microsoft Azure:
$ azuremetadata --internal-ip
Forcing DNS resolution by modifying the /etc/hosts file will work correctly in most environments. However, you will have to perform the same modification for any client that is to be managed with this SUSE Manager instance. In most cases, it is easier to manage DNS resolution by creating a DNS entry in your network environment.
Another method for managing hostname resolution is editing the /etc/resolv.conf file. Depending on the order of your setup, for example if you started the SUSE Manager instance prior to setting up DNS services, the file may not contain the appropriate search directive. Double-check that the proper search directive exists in /etc/resolv.conf; in our example, the directive would be search cloud.net. If the directive is missing, add it to the file.
For an update of the DNS records for the instance within the DNS service of your network environment, refer to the cloud service provider documentation for detailed instructions: DNS setup on Amazon EC2; DNS setup on Google Compute Engine; DNS setup on Microsoft Azure.
If you run a SUSE Manager Server instance, you can run YaST after the instance is launched to ensure the external storage is attached and prepared according to Section 4.2.1, “Using Separate Storage Volume”, and the DNS resolution is set up as described:
$ /sbin/yast2 susemanager_setup
Note that the setup of SUSE Manager from this point forward does not differ from the documentation in the SUSE Manager Guide.
The SUSE Manager setup procedure in YaST is designed as a one-pass process with no rollback or cleanup capability. Therefore, if the setup procedure is interrupted or ends with an error, it is not recommended that you repeat the setup process or attempt to manually fix the configuration. These methods are likely to result in a faulty SUSE Manager installation. If you experience errors during setup, start a new instance and begin the setup procedure again on a clean system.
If you receive a message that there is not enough space available for setup, ensure that your root volume is at least 20GB and double check that the instructions in Section 4.2.1, “Using Separate Storage Volume” have been completed correctly.
SUSE Manager Server for the Public Cloud comes with a bootstrap data module pre-installed.
The bootstrap module contains optimized package lists for bootstrapping instances started from SUSE Linux Enterprise images published by SUSE.
If you intend to register such an instance, when you create the bootstrap repository, run the mgr-create-bootstrap-repo script using the following command to create a bootstrap repository suitable for SUSE Linux Enterprise 12 SP1 instances:
$ mgr-create-bootstrap-repo --datamodule=mgr_pubcloud_bootstrap_data -c SLE-12-SP1-x86_64
See Creating the SUSE Manager Tools Repository for more information on bootstrapping.
Prior to registering instances started from on-demand images, remove the following packages from the instance to be registered:
cloud-regionsrv-client
For Amazon EC2: regionServiceClientConfigEC2, regionServiceCertsEC2
For Google Compute Engine: cloud-regionsrv-client-plugin-gce, regionServiceClientConfigGCE, regionServiceCertsGCE
For Microsoft Azure: regionServiceClientConfigAzure, regionServiceCertsAzure
If these packages are not removed, interference may arise between the repositories provided by SUSE Manager and the repositories provided by the SUSE operated update infrastructure.
Additionally, remove the line containing the susecloud.net reference from the /etc/hosts file.
If you run a SUSE Manager Proxy instance:
Launch the instance, optionally with external storage configured. If you use external storage (recommended), prepare it according to Section 4.2.1, “Using Separate Storage Volume”. It is recommended but not required to prepare the storage before configuring SUSE Manager Proxy, as the suma-storage script will migrate any existing cached data to the external storage. After preparing the instance, register the system with the parent SUSE Manager, which can be a SUSE Manager Server or another SUSE Manager Proxy. See the SUSE Manager Proxy Setup guide for details. Once registered, run:
$ /usr/sbin/configure-proxy.sh
to configure your SUSE Manager Proxy instance.
After the completion of the configuration step, SUSE Manager should be functional and running. For SUSE Manager Server, the setup process created an administrator user with the following user name:
User name: admin
Amazon EC2 | Google Compute Engine | Microsoft Azure |
---|---|---|
Password: Instance-ID | Password: Instance-ID | Password: Instance-Name |

The current value of the Instance-ID, or the Instance-Name in the case of the Azure cloud, can be obtained from the public cloud Web console or from within a terminal session as follows:
Obtain the instance ID from within an Amazon EC2 instance:
$ ec2metadata --instance-id
Obtain the instance ID from within a Google Compute Engine instance:
$ gcemetadata --query instance --id
Obtain the instance name from within a Microsoft Azure instance:
$ azuremetadata --instance-name
After logging in through the SUSE Manager Server Web UI, change the default password.
SUSE Manager Proxy does not have administration access to the Web UI. It can be managed through its parent SUSE Manager Server.
We recommend that the repositories and the database for SUSE Manager be stored on a virtual storage device. This best practice will avoid data loss in cases where the SUSE Manager instance may need to be terminated. These steps must be performed prior to running the YaST SUSE Manager setup procedure.
Provision a disk device in the public cloud environment; refer to the cloud service provider documentation for detailed instructions. The size of the disk depends on the number of distributions and channels you intend to manage with SUSE Manager. For sizing information, refer to SUSE Manager sizing examples. A rule of thumb is 25 GB per distribution per channel.
Once attached, the device appears as a Unix device node in your instance. For the following command to work, this device node name is required. In many cases the attached storage appears as /dev/sdb. To check which disk devices exist on your system, call the following command:
$ hwinfo --disk | grep -E "Device File:"
With the device name at hand the process of re-linking the directories in the filesystem SUSE Manager uses to store data is handled by the suma-storage script. In the following example we use /dev/sdb
as the device name.
$ /usr/bin/suma-storage /dev/sdb
After the call, all database and repository files used by SUSE Manager Server are moved to the newly created XFS-based storage. If your instance is a SUSE Manager Proxy, the script will instead move the Squid cache, which caches the software packages, to the newly created storage. The XFS partition is mounted below the path /manager_storage.
Create an entry in /etc/fstab (optional)
Different cloud frameworks treat the attachment of external storage devices differently at instance boot time. Please refer to the cloud environment documentation for guidance about the fstab entry.
If your cloud framework recommends adding an fstab entry, add the following line to the /etc/fstab file:
/dev/sdb1 /manager_storage xfs defaults,nofail 1 1
SUSE Manager cannot distinguish between different instances that use the same system ID. If you register a second instance with the same system ID as a previous instance, SUSE Manager will overwrite the original system data with the new system data. This can occur when you launch multiple instances from the same image, or when an image is created from a running instance. However, it is possible to clone systems and register them successfully by deleting the cloned system’s ID, and generating a new ID.
Clone the system using your preferred hypervisor’s cloning mechanism.
On the cloned system, change the hostname and IP addresses, and check the /etc/hosts file to ensure you have the right host entries.
On traditional clients, stop the rhnsd daemon with /etc/init.d/rhnsd stop or, on newer systemd-based systems, with service rhnsd stop.
For SLES 11 or Red Hat Enterprise Linux 5 or 6 clients, run these commands:
# rm /var/lib/dbus/machine-id # dbus-uuidgen --ensure
For SLES 12 or Red Hat Enterprise Linux 7 clients, run these commands:
# rm /etc/machine-id # rm /var/lib/dbus/machine-id # dbus-uuidgen --ensure # systemd-machine-id-setup
If you are using Salt, then you will also need to run these commands:
# service salt-minion stop # rm -rf /var/cache/salt
If you are using a traditional client, clean up the working files with:
# rm -f /etc/sysconfig/rhn/{osad-auth.conf,systemid}
The bootstrap should now run with a new system ID, rather than a duplicate.
If you are onboarding Salt minion clones, you will also need to check whether they have the same Salt minion ID. Delete the minion ID file on each cloned minion using the rm command. Minion ID File Location: each operating system stores the minion ID file in a slightly different location; check the table for the appropriate command.
Operating System | Commands |
---|---|
SLES 12 |
|
SLES 11 |
|
SLES 10 |
|
Red Hat Enterprise Linux 5, 6, 7 |
|
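On recent Salt versions the minion ID file is typically /etc/salt/minion_id, so the command is usually a variant of the following sketch (verify the exact path for your operating system):

rm /etc/salt/minion_id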
Once you have deleted the minion ID file, re-run the bootstrap script, and restart the minion to see the cloned system in SUSE Manager with the new ID.
Apache and Tomcat parameters should only be modified with support or consulting, as these parameters can have severe and catastrophic performance impacts on your server when improperly adjusted. SUSE will not be able to provide support for catastrophic failure when these advanced parameters are modified without consultation. Tuning values for Apache httpd and Tomcat requires aligning these parameters with your server hardware. Furthermore, altered values should be tested within a test environment first.
The MaxClients setting determines the number of Apache httpd processes, and thus limits the number of client connections that can be made at the same time (SUSE Manager uses the prefork MultiProcessing Module). The default value for MaxClients in SUSE Manager is 150. If you need to set the MaxClients value greater than 150, Apache httpd’s ServerLimit setting and Tomcat’s maxThreads must also be increased accordingly (see below). The Apache httpd MaxClients parameter must always be less than or equal to Tomcat’s maxThreads parameter. If the MaxClients value is reached while the software is running, new client connections will be queued and forced to wait; this may result in timeouts.
You can check Apache httpd’s error.log for details:
[error] Server reached MaxClients setting, consider increasing the MaxClients setting
The default MaxClients parameter can be overridden on SUSE Manager by editing the server-tuning.conf file located at /etc/apache2/. Example server-tuning.conf file:
# prefork MPM
<IfModule prefork.c>
        # number of server processes to start
        # http://httpd.apache.org/docs/2.2/mod/mpm_common.html#startservers
        StartServers         5
        # minimum number of server processes which are kept spare
        # http://httpd.apache.org/docs/2.2/mod/prefork.html#minspareservers
        MinSpareServers      5
        # maximum number of server processes which are kept spare
        # http://httpd.apache.org/docs/2.2/mod/prefork.html#maxspareservers
        MaxSpareServers      10
        # highest possible MaxClients setting for the lifetime of the Apache process.
        # http://httpd.apache.org/docs/2.2/mod/mpm_common.html#serverlimit
        ServerLimit          150
        # maximum number of server processes allowed to start
        # http://httpd.apache.org/docs/2.2/mod/mpm_common.html#maxclients
        MaxClients           150
        # maximum number of requests a server process serves
        # http://httpd.apache.org/docs/2.2/mod/mpm_common.html#maxrequestsperchild
        MaxRequestsPerChild  10000
</IfModule>
Whenever the Apache httpd MaxClients parameter is changed, ServerLimit must also be updated to the same value, or the change will have no effect.
Tomcat’s maxThreads represents the maximum number of request processing threads it will create. This value determines the maximum number of simultaneous requests it can handle. All HTTP requests to the SUSE Manager server (from clients, browsers, XMLRPC API scripts, etc.) are handled by Apache httpd, and some of them are routed to Tomcat for further processing. It is thus important that Tomcat is able to serve the same number of simultaneous requests that Apache httpd serves in the worst case. The default value for SUSE Manager is 200 and should always be equal to or greater than Apache httpd’s MaxClients.
The maxThreads value is set in the server.xml file located at /etc/tomcat/. Example relevant lines in server.xml:
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" URIEncoding="UTF-8"
           address="127.0.0.1" maxThreads="200" connectionTimeout="20000"/>
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" URIEncoding="UTF-8"
           address="::1" maxThreads="200" connectionTimeout="20000"/>
When configuring Apache httpd’s MaxClients and Tomcat’s maxThreads parameters, you should also take into consideration that each HTTP connection needs one or more database connections. If the RDBMS cannot serve an adequate number of connections, issues will arise.
See the following equation for a rough calculation of the needed amount of database connections:
((3 * java_max) + apache_max + 60)
Where:
3 is the number of Java processes the server runs with pooled connections (Tomcat, Taskomatic and Search)
java_max is the maximum number of connections per Java pool (20 by default, changeable in /etc/rhn/rhn.conf via the hibernate.c3p0.max_size parameter)
apache_max is Apache httpd’s MaxClients
60 is the maximum expected number of extra connections for local processes and other uses
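As a rough worked example with the defaults (java_max = 20, MaxClients = 150), this gives ((3 * 20) + 150 + 60) = 270 database connections.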
The following sections contain considerations about big scale deployments. In this context, a big scale deployment comprises 1000 minions or more.
SUSE recommends the following in a big scale SUSE Manager deployment:
SUSE Manager servers should have at least 8 recent x86 cores, 32 GiB of RAM, and, most importantly, fast I/O devices such as at least an SSD (2 SSDs in RAID-0 are strongly recommended).
Proxies with many minions (hundreds) should have at least 2 recent x86 cores and 16 GiB of RAM.
Use one SUSE Manager Proxy per 500-1000 clients. Take into account that download time depends on network capacity. Here is a rough example calculation with a physical link speed of 1 Gbit/s (about 119 MiB/s):
400 Megabytes * 3000 / 119 Megabyte/s / 60 = 169 Minutes
This is:
Size of updates * Number of minions / Theoretical download speed / 60
Depending on hardware, you can accept hundreds of minion keys.
Plan time for onboarding minions: at least one hour per 1000 minions.
Onboarding more than approximately 1000 minions directly to the SUSE Manager server is not recommended; proxies should be used instead. This is because every minion can use up to 3 TCP connections simultaneously, and too many TCP connections can cause performance issues.
If the following error appears in the output of dmesg, you probably have too many minions attached to a single SUSE Manager server or proxy for the ARP cache to contain all of their addresses:
kernel: neighbour table overflow
In that case, increase the ARP cache values via sysctl, for example by adding the following lines to /etc/sysctl.conf:
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 16384
net.ipv4.neigh.default.gc_interval = 60
net.ipv4.neigh.default.gc_stale_time = 120
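To apply the new values without a reboot, run as root:

sysctl -p /etc/sysctl.conf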
Always start small and scale up gradually. Keep the server monitored in order to identify possible issues early.
SUSE proposes the following tuning settings in a big scale SUSE Manager deployment:
Increase the maximum Tomcat heap memory to cope with a potentially long queue of Salt return results. Set 8 GiB instead of the current default 1 GiB: change the Xmx1G parameter in /etc/sysconfig/tomcat to Xmx8G, as sketched below (affects onboarding and Action execution).
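A sketch of the relevant line in /etc/sysconfig/tomcat; adjust the -Xmx value inside the existing options string rather than replacing the other options (the exact variable layout in your file may differ):

JAVA_OPTS="${JAVA_OPTS} -Xmx8G"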
Increase the number of Taskomatic workers to parallelize work on a high number of separate jobs. Set the parameter org.quartz.threadPool.threadCount = 100 in /etc/rhn/rhn.conf (affects onboarding and staging).
Allow Taskomatic to check for runnable jobs more frequently to reduce latency. Set the parameter org.quartz.scheduler.idleWaitTime = 1000 in /etc/rhn/rhn.conf (affects onboarding, staging and Action execution).
Increase Tomcat’s Salt return result workers to parallelize work on a high number of Salt return results. Set the parameter java.message_queue_thread_pool_size = 100 in /etc/rhn/rhn.conf (affects patching).
Increase the number of PostgreSQL connections available to Java applications (Tomcat, Taskomatic) according to the previous parameters, otherwise extra workers will starve waiting for a connection. Set the parameter hibernate.c3p0.max_size = 150 in /etc/rhn/rhn.conf (affects all minion operations). Make sure enough PostgreSQL connections are configured before changing this parameter; refer to smdba system-check autotuning --help for automatic tuning of the PostgreSQL configuration file while changing the number of available connections. Additional manual tuning is usually not necessary but might be required depending on scale and exact use cases.
Increase Salt’s presence ping timeouts if responses might come back later than the defaults. Set the parameters java.salt_presence_ping_timeout = 20 and java.salt_presence_ping_gather_job_timeout = 20 in /etc/rhn/rhn.conf (affects all minion operations).
Increase the number of Salt master workers so that more requests can run in parallel (otherwise Tomcat and Taskomatic workers will starve waiting for the Salt API, and Salt will not be able to serve files in a timely manner). Set the parameter worker_threads: 100 in /etc/salt/master.d/susemanager.conf (affects onboarding and patching).
Increase this parameter further if file management states fail with the error "Unable to manage file: Message timed out". Note that Salt master workers can consume significant amounts of RAM (typically about 70 MB per worker). It is recommended to keep usage monitored when increasing this value and to do so in relatively small increments (e.g. 20) until failures are no longer produced.
Increase the maximum heap memory for the search daemon to be able to index many minions. Set 4 GiB instead of the current default 512 MB: add rhn-search.java.maxmemory=4096 in /etc/rhn/rhn.conf (affects background indexing only).
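For reference, a consolidated sketch of the /etc/rhn/rhn.conf settings named in the list above:

org.quartz.threadPool.threadCount = 100
org.quartz.scheduler.idleWaitTime = 1000
java.message_queue_thread_pool_size = 100
hibernate.c3p0.max_size = 150
java.salt_presence_ping_timeout = 20
java.salt_presence_ping_gather_job_timeout = 20
rhn-search.java.maxmemory=4096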
Consider disabling Taskomatic jobs, especially if the provided functionality is not used:
Disable the daily comparison of configuration files: click › , then the link, then the button, and finally .
Disable the hourly synchronization of Cobbler files: click › , then the link, then the button, and finally .
Disable the daily run of Gatherer and Subscription Matcher: click › , then the link, then the button, and finally .
Note that increasing the number of PostgreSQL connections will require more RAM; make sure the SUSE Manager server is monitored and swap is never used.
Also note that the above settings should be regarded as guidelines; they have been tested to be safe, but care should be exercised when changing them, and consulting support is highly recommended.
Version 1, 2018-03-20
The following topic provides an overview of the Salt SSH integration with SUSE Manager. This integration adds support for both ssh-push and ssh-push-tunnel connections for Salt minions.
As with the traditional stack, minions may be managed via an SSH connection in place of ZeroMQ. This additional functionality is based on Salt SSH, which enables you to execute Salt commands and states via SSH without ever needing to install a salt-minion.
When the server executes an action on a minion an ssh connection is made on demand. This connection differs from the always-connected mode used by minions managed via Zeromq.
In SUSE Manager there are two ssh-push methods. In both cases the server initiates an SSH connection to the minion in order to execute a Salt call using salt-ssh. The difference between the two methods is how zypper/yum initially connects to the server repositories:
ssh-push: zypper works as usual. The http(s) connection to the server is created directly.
ssh-push-tunnel: The server creates an http(s) connection through an ssh tunnel. The http(s) connection initiated by zypper is redirected through the tunnel by means of /etc/hosts aliasing (see below). This method should be used for setups with an in-place firewall that blocks http(s) connections from a minion to the server.
As with all Salt calls, SUSE Manager invokes salt-ssh via the salt-api. Salt SSH relies on a Roster to obtain details such as hostname, ports, and ssh parameters of an ssh minion. SUSE Manager keeps these details in the database and makes them available to Salt by generating a temporary Roster file for each salt-ssh call. The location of the temporary Roster file is supplied to salt-ssh using the --roster-file= option.
salt-ssh supports both password and key authentication. SUSE Manager uses both methods:
Password authentication is used only when bootstrapping. During the bootstrap step the key of the server is not authorized on the minion and therefore a password must be utilized for a connection to be made. The password is used transiently in a temporary roster file used for bootstrapping. This password is not stored.
All other common Salt calls use key authentication. During the bootstrap step, the ssh key of the server is authorized on the minion (added to the minion’s ~/.ssh/authorized_keys file); therefore subsequent calls no longer require a password.
The user for salt-ssh calls made by SUSE Manager is taken from the ssh_push_sudo_user setting. The default value is root. If the value of ssh_push_sudo_user is not root, the --sudo options of salt-ssh are used.
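For example, a hypothetical /etc/rhn/rhn.conf entry switching to a non-root push user (the user name is illustrative):

ssh_push_sudo_user = mgrpush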
For the ssh-push-tunnel method, the traffic originating from zypper/yum has to be redirected through an ssh tunnel in order to bypass any firewall blocking a direct connection from the minion to the server. This is achieved by using port 1233 in the repo URL:
https://suma-server:1233/repourl...
Next, the suma-server hostname is aliased to localhost in /etc/hosts:
127.0.0.1 localhost suma-server
The server creates a reverse ssh tunnel that connects localhost:1233 on the minion to suma-server:443 (ssh ... -R 1233:suma-server:443).
The result is that zypper/yum will actually connect to localhost:1233, which is then forwarded to suma-server:443 via the ssh tunnel.
This implies that zypper can contact the server only while the tunnel is open, which happens only when the server executes an action on the minion. Manual zypper operations that require server connectivity are not possible in this case.
Prepare the Salt Roster for the call:
Create the remote port forwarding option if the contact method is ssh-push-tunnel
Compute the ProxyCommand if the minion is connected through a proxy
Create the Roster content:
hostname
user
port
remote_port_forwards: the remote port forwarding ssh option
ssh_options: other ssh options:
ProxyCommand: if the minion connects through a SUMA proxy
timeout: default 180s
minion_opts:
master: set to the minion id if the contact method is ssh-push-tunnel
Create a temporary Roster file
Execute a synchronous salt-ssh call via the API
Remove the temporary Roster file
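For illustration, a generated temporary roster entry might look like the following sketch (all values are examples; remote_port_forwards and ssh_options appear only when applicable):

minion1.example.com:
  host: minion1.example.com
  user: root
  port: 22
  timeout: 180
  minion_opts:
    master: minion1.example.com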
Additional Information:
Bootstrapping minions uses salt-ssh under the hood. This happens for both regular and ssh minions. The bootstrap sequence is a bit different from a regular salt-ssh call:
For a regular minion generate and pre-authorize the Salt key of the minion
If this is an ssh minion and a proxy was selected, retrieve the ssh public key of the proxy using the mgrutil.chain_ssh_cmd runner. The runner copies the public key of the proxy to the server using ssh. If needed, it can chain multiple ssh commands to reach the proxy across multiple hops.
Generate pillar data for bootstrap. Pillar data contains:
The hostname of the SUSE Manager server
The hostname of the minion to bootstrap
The connection type
The user for salt-ssh
The public minion key that was pre-authorized
The private minion key that was pre-authorized
The public ssh key that was retrieved from the proxy if the target is an ssh minion and a proxy was selected
If the contact method is ssh-push-tunnel, fill the remote port forwarding option
If the minion connects through a SUMA proxy, compute the ProxyCommand option. This depends on the path used to connect to the proxy, e.g. server → proxy1 → proxy2 → minion
Generate the roster for bootstrap into a temporary file. This contains:
hostname
user
password
port
remote_port_forwards
: the remote port forwarding ssh option
ssh_options
: other ssh options:
ProxyCommand
if the minion connects through a SUMA proxy
timeout
: default 180s
Via the Salt API execute:
salt-ssh --roster-file=<temporary_bootstrap_roster> minion state.apply certs,<bootstrap_state>
<bootstrap_state> is replaced by bootstrap for regular minions or ssh_bootstrap for ssh minions.
The following image provides an overview of the Salt SSH bootstrap process.
Additional Information:
In order to make salt-ssh work with SUSE Manager proxies, the ssh connection is chained from one server/proxy to the next. This is also known as a multi-hop or multi-gateway ssh connection.
In order to redirect the ssh connection through the proxies, the ssh ProxyCommand
option is used. This option invokes an arbitrary command that is expected to connect to the ssh port on the target host. The standard input and output of the command are used by the invoking ssh process to talk to the remote ssh daemon.
The ProxyCommand basically replaces the TCP/IP connection. It doesn’t do any authorization, encryption, etc. Its role is simply to create a byte stream to the remote ssh daemon’s port.
For example, consider connecting to a server behind a gateway. In the sketch below, netcat (nc) is used to pipe port 22 of the target host into the ssh standard input/output.
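A minimal sketch of such a ProxyCommand invocation, with gateway and target as illustrative hostnames:

ssh -o ProxyCommand="ssh gateway nc target 22" target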
SUSE Manager initiates the ssh connections as described above.
Additionally, the ProxyCommand uses ssh to create a connection from the server to the minion through the proxies.
The following example uses the ProxyCommand option with two proxies and the usual ssh-push method:
# 1
/usr/bin/ssh -i /srv/susemanager/salt/salt_ssh/mgr_ssh_id -o StrictHostKeyChecking=no -o User=mgrsshtunnel proxy1

# 2
/usr/bin/ssh -i /var/lib/spacewalk/mgrsshtunnel/.ssh/id_susemanager_ssh_push -o StrictHostKeyChecking=no -o User=mgrsshtunnel -W minion:22 proxy2
Connect from the server to the first proxy.
Connect from the first proxy to the second and forward standard input/output on the client to minion:22 using the -W option.
The following example uses the ProxyCommand option with two proxies over an ssh-push-tunnel connection:
# 1
/usr/bin/ssh -i /srv/susemanager/salt/salt_ssh/mgr_ssh_id -o User=mgrsshtunnel proxy1

# 2
/usr/bin/ssh -i /home/mgrsshtunnel/.ssh/id_susemanager_ssh_push -o User=mgrsshtunnel proxy2

# 3
/usr/bin/ssh -i /home/mgrsshtunnel/.ssh/id_susemanager_ssh_push -o User=root -R 1233:proxy2:443 minion

# 4
/usr/bin/ssh -i /root/.ssh/mgr_own_id -W minion:22 -o User=root minion
Connect from the server to the first proxy.
Connect from the first proxy to the second.
Connect from the second proxy to the minion and open a reverse tunnel (-R 1233:proxy2:443) from the minion to the https port on the second proxy.
Connect from the minion to itself and forward the standard i/o of the server to the ssh port of the minion (-W minion:22). This is equivalent to ssh … proxy2 netcat minion 22 and is needed because ssh does not allow both the reverse tunnel (-R 1233:proxy2:443) and the standard i/o forwarding (-W minion:22) in the same command.
Additional Information:
In order to connect to a proxy the parent server/proxy uses a specific user called mgrsshtunnel
.
The ssh config /etc/ssh/sshd_config
of the proxy will force the execution of /usr/sbin/mgr-proxy-ssh-force-cmd
when mgrsshtunnel
connects.
/usr/sbin/mgr-proxy-ssh-force-cmd
is a simple shell script that allows only the execution of scp
, ssh
or cat
commands.
The connection to the proxy or minion is authorized using ssh keys in the following way:
The server connects to the minion and to the first proxy using the key in /srv/susemanager/salt/salt_ssh/mgr_ssh_id
.
Each proxy has its own key pair in /home/mgrsshtunnel/.ssh/id_susemanager_ssh_push
.
Each proxy authorizes the key of the parent proxy or server.
The minion authorizes its own key.
Additional Information:
For both ssh-push and ssh-push-tunnel the minion connects to the proxy to retrieve packages and repo data.
The difference is how the connection works:
In case of ssh-push, zypper or yum connect directly to the proxy using http(s). This assumes there is no firewall between the minion and the proxy that would block http connections initiated by the minion.
In case of ssh-push-tunnel, the http connection to the proxy is redirected through a reverse ssh tunnel.
When the spacewalk-proxy
package is installed on the proxy the user mgrsshtunnel
is created if it doesn’t already exist.
During the initial configuration with configure-proxy.sh
the following happens:
Generate an ssh key pair or import an existing one
Retrieve the ssh key of the parent server/proxy in order to authorize it on the proxy
Configure the sshd
of the proxy to restrict the user mgrsshtunnel
This configuration is done by the mgr-proxy-ssh-push-init
script. This is called from configure-proxy.sh
and the user doesn’t have to invoke it manually.
Retrieving the parent key is done by calling an HTTP endpoint on the parent server or proxy.
First, https://$PARENT/pub/id_susemanager_ssh_push.pub
is tried. If the parent is a proxy, this will return the public ssh key of that proxy.
If a 404
is received, it is assumed that the parent is a server, not a proxy, and https://$PARENT/rhn/manager/download/saltssh/pubkey
is tried.
If /srv/susemanager/salt/salt_ssh/mgr_ssh_id.pub
already exists on the server, it is returned.
If the public key doesn’t exist (because salt-ssh
has not been invoked yet), the key is generated by calling the mgrutil.ssh_keygen
runner
salt-ssh generates a key pair the first time it is invoked in
/srv/susemanager/salt/salt_ssh/mgr_ssh_id
. The previous sequence is needed in case a proxy is configured before salt-ssh was invoked for the first time.
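To check whether the pair already exists on the server, a quick listing is enough:

ls -l /srv/susemanager/salt/salt_ssh/mgr_ssh_id /srv/susemanager/salt/salt_ssh/mgr_ssh_id.pub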
Additional Information:
This chapter provides guidance on the setup of an Icinga server using SLES 12 SP4. For more information, see the Official Icinga documentation: http://docs.icinga.org/latest/en/.
Icinga packages are found in the SLE-Manager-Tools12-Updates x86_64 channel.
Do not install Icinga on the SUSE Manager server. Install Icinga on a stand-alone SUSE Linux Enterprise client.
Register the new client with SUSE Manager and subscribe it to the SUSE Manager client and update channels. SLES 12 and later include these channels by default.
Install the required Icinga packages on the new client:
zypper in icinga icinga-idoutils-pgsql postgresql postgresql94-server \
monitoring-plugins-all apache2
Edit the /etc/icinga/objects/contacts.cfg
file and add the email address which you will use for receiving alerts.
define contact {
        contact_name    icingaadmin             ; Short name of user
        use             generic-contact         ; Inherit default values
        alias           Icinga Admin            ; Full name of user
        email           icinga@localhost        ; <<*** CHANGE THIS TO YOUR EMAIL ADDRESS ***
}
Enable postgres on boot and start the database:
systemctl enable postgresql.service
systemctl start postgresql.service
Create the database and user for Icinga:
psql
postgres=# ALTER USER postgres WITH PASSWORD '<newpassword>';
postgres=# CREATE USER icinga;
postgres=# ALTER USER icinga WITH PASSWORD 'icinga';
postgres=# CREATE DATABASE icinga;
postgres=# GRANT ALL ON DATABASE icinga TO icinga;
postgres=# \q
exit
Adjust client authentication rights located in /var/lib/pgsql/data/pg_hba.conf
to match the following:
# TYPE  DATABASE        USER        ADDRESS         METHOD
local   icinga          icinga                      trust
local   all             postgres                    ident
# "local" is for Unix domain socket connections only
local   all             all                         trust
# IPv4 local connections:
host    all             all         127.0.0.1/32    trust
# IPv6 local connections:
host    all             all         ::1/128         trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local   replication    postgres                    peer
#host    replication    postgres    127.0.0.1/32    ident
#host    replication    postgres    ::1/128         ident
Ensure the local entries for the icinga authentication settings are placed above all other local entries, or you will get an error when configuring the database schema.
The entries in pg_hba.conf
are read from top to bottom.
Reload the Postgres service:
systemctl reload postgresql.service
Configure the database schema by running the following command in /usr/share/doc/packages/icinga-idoutils-pgsql/pgsql/
:
psql -U icinga -d icinga < pgsql.sql
Edit the following lines in /etc/icinga/ido2db.cfg
to switch from the default setting of mysql to postgres:
vi /etc/icinga/ido2db.cfg

db_servertype=pgsql
db_port=5432
Allow port 5432
through your firewall or you will not be able to access the WebGUI.
Create an icinga admin account for logging into the web interface:
htpasswd -c /etc/icinga/htpasswd.users icingaadmin
Enable and start all required services:
systemctl enable icinga.service
systemctl start icinga.service
systemctl enable ido2db.service
systemctl start ido2db.service
systemctl enable apache2.service
systemctl start apache2.service
Log in to the WebGUI at: http://localhost/icinga.
This concludes setup and initial configuration of Icinga.
The following sections provide an overview of monitoring your SUSE Manager server using Icinga.
You will add SUSE Manager as a host to Icinga and use a Nagios script/plugin to monitor running services via NRPE
(Nagios Remote Plugin Executor).
This section does not attempt to cover all monitoring solutions Icinga has to offer but should help you get started.
On your SUSE Manager server install the required packages:
zypper install nagios-nrpe susemanager-nagios-plugin insserv nrpe monitoring-plugins-nrpe
Modify the NRPE configuration file located at:
/etc/nrpe.cfg
Edit or add the following lines:
server_port=5666
nrpe_user=nagios
nrpe_group=nagios
allowed_hosts=Icinga.example.com
dont_blame_nrpe=1
command[check_systemd.sh]=/usr/lib/nagios/plugins/check_systemd.sh $ARG1$
Variable definitions:
The variable server_port
defines the port nrpe will listen on.
The default port is 5666.
This port must be opened in your firewall.
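On a SLES 12 system the default firewall is SuSEfirewall2; one way to open the port under that assumption (adapt to your own firewall solution) is to add it to FW_SERVICES_EXT_TCP and reload the firewall:

# In /etc/sysconfig/SuSEfirewall2 add the nrpe port:
#   FW_SERVICES_EXT_TCP="5666"
# then reload the firewall:
systemctl restart SuSEfirewall2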
The variables nrpe_user
and nrpe_group
control the user and group IDs that nrpe will run under. SUSE Manager
probes need access to the database; therefore, nrpe requires access to the database credentials stored in /etc/rhn/rhn.conf.
There are multiple ways to achieve this.
You may add the user nagios
to the group www
(this is already done for other IDs such as tomcat); alternatively, you can simply have nrpe run with the effective group ID www so that it can read /etc/rhn/rhn.conf.
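For example, the first approach is a one-line change, run as root:

usermod -a -G www nagios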
The variable allowed_hosts
defines which hosts nrpe will accept connections from.
Enter the FQDN or IP address of your Icinga server here.
The use of the variable dont_blame_nrpe is unavoidable in this example.
For security reasons, nrpe commands do not allow arguments to be passed by default.
However, in this example you need to pass the name of the host you want information about to nrpe as an argument.
This is only possible when the variable is set to 1.
You need to define the command(s) that nrpe can run on SUSE Manager.
To add a new nrpe command, specify a command call by adding command
followed by square brackets containing the actual nagios/icinga plugin name.
Next define the location of the script to be called on your SUSE Manager server.
Finally the variable $ARG1$
will be replaced by the actual host the Icinga server would like information about.
In the example above, the command is named check_systemd.sh
.
You can specify any name you like but keep in mind the command name is the actual script stored in /usr/lib/nagios/plugins/
on your SUSE Manager server.
This name must also match your probe definition on the Icinga server.
This will be described in greater detail later in the chapter. The check_systemd.sh script/plugin will also be provided in a later section.
Once your configuration is complete, load the new nrpe configuration as root with:
systemctl start nrpe
This concludes setup of nrpe.
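Before continuing, you can verify from the Icinga server that nrpe answers; the hostname is illustrative, and a successful check prints the installed NRPE version:

/usr/lib/nagios/plugins/check_nrpe -H susemanager.example.com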
To add a new host to Icinga create a host.cfg file for each host in /etc/icinga/conf.d/
.
For example susemanager.cfg
:
define host {
        host_name               susemanager
        alias                   SUSE Manager
        address                 192.168.1.1
        check_period            24x7
        check_interval          1
        retry_interval          1
        max_check_attempts      10
        check_command           check-host-alive
}
Place the host IP address you want to add to Icinga on the Address
line.
After adding a new host restart Icinga as root to load the new configuration:
systemctl restart icinga
To add services for monitoring on a specific host define them by adding a service definition to your host.cfg file located in /etc/icinga/conf.d
.
For example, you can monitor whether a system's SSH service is running with the following service definition.
define service {
        host_name               susemanager
        use                     generic-service
        service_description     SSH
        check_command           check_ssh
        check_interval          60
}
After adding any new services restart Icinga as root to load the new configuration:
systemctl restart icinga
You can create hostgroups to simplify and visualize hosts logically.
Create a hostgroups.cfg
file located in /etc/icinga/conf.d/
and add the following lines:
define hostgroup {
        hostgroup_name          ssh_group
        alias                   ssh group
        members                 susemanager,mars,jupiter,pluto,examplehost4
}
The members
variable should contain the host_name
from within each host.cfg file you created to represent your hosts.
Every time you add a host by creating a host.cfg file, add its host_name to the members list if you want it included in a logical hostgroup.
After adding several hosts to a hostgroup restart Icinga as root to load the new configuration:
systemctl restart icinga
You can create logical groupings of services as well.
For example if you would like to create a group of essential SUSE Manager services which are running define them within a servicegroups.cfg
file placed in /etc/icinga/conf.d/
:
#Servicegroup 1
define servicegroup {
        servicegroup_name       SUSE Manager Essential Services
        alias                   Essential Services
}

#Servicegroup 2
define servicegroup {
        servicegroup_name       Client Patch Status
        alias                   SUSE Manager 3 Client Patch Status
}
Within each host’s host.cfg
file add a service to a servicegroup with the following variable:
define service {
        use                     generic-service
        service_description     SSH
        check_command           check_ssh
        check_interval          60
        servicegroups           SUSE Manager Essential Services
}
All services that include the servicegroups
variable and the name of the servicegroup will be added to the specified servicegroup.
After adding services to a servicegroup restart Icinga as root to load the new configuration:
systemctl restart icinga
The following section provides information on monitoring uptime of critical SUSE Manager services.
As root create a new plugin file called check_systemd.sh
in /usr/lib/nagios/plugins/
on your SUSE Manager server:
vi /usr/lib/nagios/plugins/check_systemd.sh
For this example you will use an open source community script to monitor systemd services. You may also wish to write your own.
#!/bin/bash
# Copyright (C) 2016 Mohamed El Morabity <melmorabity@fedoraproject.com>
#
# This module is free software: you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free Software
# Foundation, either version 3 of the License, or (at your option) any later
# version.
#
# This software is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along with
# this program. If not, see <http://www.gnu.org/licenses/>.

PLUGINDIR=$(dirname $0)
. $PLUGINDIR/utils.sh

if [ $# -ne 1 ]; then
    echo "Usage: ${0##*/} <service name>" >&2
    exit $STATE_UNKNOWN
fi

service=$1

status=$(systemctl is-enabled $service 2>/dev/null)
r=$?
if [ -z "$status" ]; then
    echo "ERROR: service $service doesn't exist"
    exit $STATE_CRITICAL
fi

if [ $r -ne 0 ]; then
    echo "ERROR: service $service is $status"
    exit $STATE_CRITICAL
fi

systemctl --quiet is-active $service
if [ $? -ne 0 ]; then
    echo "ERROR: service $service is not running"
    exit $STATE_CRITICAL
fi

echo "OK: service $service is running"
exit $STATE_OK
A current version of this script can be found at: https://github.com/melmorabity/nagios-plugin-systemd-service/blob/master/check_systemd_service.sh
The script used in this example is an external script and is not supported by SUSE.
Always check to ensure scripts are not modified and do not contain malicious code before using them on production machines.
Make the script executable:
chmod 755 check_systemd.sh
On your SUSE Manager server, add the following lines to the nrpe.cfg
located at /etc/nrpe.cfg
:
# SUSE Manager Service Checks
command[check_systemd.sh]=/usr/lib/nagios/plugins/check_systemd.sh $ARG1$
This will allow the Icinga server to call the plugin via nrpe on SUSE Manager.
Provide proper permissions by adding the script to the sudoers file:
visudo
nagios  ALL=(ALL)  NOPASSWD:/usr/lib/nagios/plugins/check_systemd.sh
Defaults:nagios !requiretty
You can also add permissions to the entire plugin directory instead of allowing permissions for individual scripts:
nagios ALL=(ALL) NOPASSWD:/usr/lib/nagios/plugins/
On your Icinga server define the following command within /etc/icinga/objects/commands.cfg
:
define command {
        command_name    check-systemd-service
        command_line    /usr/lib/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c check_systemd.sh -a $ARG1$
}
Now you will add the following critical services to be monitored to your SUSE Manager host file:
auditlog-keeper.service
jabberd.service
spacewalk-wait-for-jabberd.service
tomcat.service
spacewalk-wait-for-tomcat.service
salt-master.service
salt-api.service
spacewalk-wait-for-salt.service
apache2.service
osa-dispatcher.service
rhn-search.service
cobblerd.service
taskomatic.service
spacewalk-wait-for-taskomatic.service
On your Icinga server add the following service blocks to your SUSE Manager host file susemanager.cfg
located in /etc/icinga/conf.d/
.
(This configuration file was created in the previous section Adding a Host to Icinga.)
# Monitor Audit Log Keeper
define service {
        use                     generic-service
        host_name               susemanager
        check_interval          1
        active_checks_enabled   1
        service_description     Audit Log Keeper Service
        servicegroups           SUSE Manager Essential Services
        check_command           check-systemd-service!auditlog-keeper.service
}

# Monitor Jabberd
define service {
        use                     generic-service
        host_name               susemanager
        check_interval          1
        active_checks_enabled   1
        service_description     Jabberd Service
        servicegroups           SUSE Manager Essential Services
        check_command           check-systemd-service!jabberd.service
}

# Monitor Spacewalk Wait for Jabberd
define service {
        use                     generic-service
        host_name               susemanager
        check_interval          1
        active_checks_enabled   1
        service_description     Spacewalk Wait For Jabberd Service
        servicegroups           SUSE Manager Essential Services
        check_command           check-systemd-service!spacewalk-wait-for-jabberd.service
}

# Monitor Tomcat
define service {
        use                     generic-service
        host_name               susemanager
        check_interval          1
        active_checks_enabled   1
        service_description     Tomcat Service
        servicegroups           SUSE Manager Essential Services
        check_command           check-systemd-service!tomcat.service
}

# Monitor Spacewalk Wait for Tomcat
define service {
        use                     generic-service
        host_name               susemanager
        check_interval          1
        active_checks_enabled   1
        service_description     Spacewalk Wait For Tomcat Service
        servicegroups           SUSE Manager Essential Services
        check_command           check-systemd-service!spacewalk-wait-for-tomcat.service
}

# Monitor Salt Master
define service {
        use                     generic-service
        host_name               susemanager
        check_interval          1
        active_checks_enabled   1
        service_description     Salt Master Service
        servicegroups           SUSE Manager Essential Services
        check_command           check-systemd-service!salt-master.service
}

# Monitor Salt API
define service {
        use                     generic-service
        host_name               susemanager
        check_interval          1
        active_checks_enabled   1
        service_description     Salt API Service
        servicegroups           SUSE Manager Essential Services
        check_command           check-systemd-service!salt-api.service
}

# Monitor Spacewalk Wait for Salt
define service {
        use                     generic-service
        host_name               susemanager
        check_interval          1
        active_checks_enabled   1
        service_description     Spacewalk Wait For Salt Service
        servicegroups           SUSE Manager Essential Services
        check_command           check-systemd-service!spacewalk-wait-for-salt.service
}

# Monitor apache2
define service {
        use                     generic-service
        host_name               susemanager
        check_interval          1
        active_checks_enabled   1
        service_description     Apache2 Service
        servicegroups           SUSE Manager Essential Services
        check_command           check-systemd-service!apache2.service
}

# Monitor osa dispatcher
define service {
        use                     generic-service
        host_name               susemanager
        check_interval          1
        active_checks_enabled   1
        service_description     Osa Dispatcher Service
        servicegroups           SUSE Manager Essential Services
        check_command           check-systemd-service!osa-dispatcher.service
}

# Monitor rhn search
define service {
        use                     generic-service
        host_name               susemanager
        check_interval          1
        active_checks_enabled   1
        service_description     RHN Search Service
        servicegroups           SUSE Manager Essential Services
        check_command           check-systemd-service!rhn-search.service
}

# Monitor Cobblerd
define service {
        use                     generic-service
        host_name               susemanager
        check_interval          1
        active_checks_enabled   1
        service_description     Cobblerd Service
        servicegroups           SUSE Manager Essential Services
        check_command           check-systemd-service!cobblerd.service
}

# Monitor taskomatic
define service {
        use                     generic-service
        host_name               susemanager
        check_interval          1
        active_checks_enabled   1
        service_description     Taskomatic Service
        servicegroups           SUSE Manager Essential Services
        check_command           check-systemd-service!taskomatic.service
}

# Monitor wait for taskomatic
define service {
        use                     generic-service
        host_name               susemanager
        check_interval          1
        active_checks_enabled   1
        service_description     Spacewalk Wait For Taskomatic Service
        servicegroups           SUSE Manager Essential Services
        check_command           check-systemd-service!spacewalk-wait-for-taskomatic.service
}
Each of these service blocks will be passed as the check-systemd-service!$ARG1$ variable to the SUSE Manager server via nrpe.
You probably noticed the servicegroups parameter was also included.
This adds each service to a servicegroup, which has been defined in a servicegroups.cfg
file located in /etc/icinga/conf.d/
:
define servicegroup {
        servicegroup_name       SUSE Manager Essential Services
        alias                   Essential Services
}
Restart Icinga:
systemctl restart icinga
You can use the check_suma_patches
plugin to check if any machines connected to SUSE Manager as clients require a patch or an update.
The following procedure will guide you through the setup of the check_suma_patches plugin.
On your SUSE Manager server open /etc/nrpe.cfg
and add the following lines:
# SUSE Manager check_patches
command[check_suma_patches]=sudo /usr/lib/nagios/plugins/check_suma_patches $ARG1$
On your Icinga server open /etc/icinga/objects/commands.cfg
and define the following command:
define command {
        command_name    check_suma
        command_line    /usr/lib/nagios/plugins/check_nrpe -H 192.168.1.1 -c $ARG1$ -a $HOSTNAME$
}
On your Icinga server open any of your SUSE Manager client host configuration files located at /etc/icinga/conf.d/clients.cfg
and add the following service definition:
define service {
        use                     generic-service
        host_name               client-hostname
        service_description     Available Patches for client-host_name
        servicegroups           Client Patch Status
        check_command           check_suma!check_suma_patches
}
In the above service definition notice that this host is included in the servicegroup labeled Client Patch Status.
Add the following servicegroup definition to /etc/icinga/conf.d/servicegroups.cfg
to create a servicegroup:
define servicegroup { servicegroup_name Client Patch Status alias SUSE Manager 3 Client Patch Status }
Status will be reported as follows:
OK: System is up to date
Warning: At least one patch or package update is available
Critical: At least one security/critical update is available
Unspecified: The host cannot be found in the SUSE Manager database or the host name is not unique
This concludes setup of the check_suma_patches
plugin.
You can use the check_suma_lastevent
plugin to display the last action executed on any host.
The following procedure will guide you through the setup of the check_suma_lastevent plugin.
On your SUSE Manager server open /etc/nrpe.cfg
and add the following lines:
# Check SUSE Manager Hosts last events
command[check_events]=sudo /usr/lib/nagios/plugins/check_suma_lastevent $ARG1$
On the Icinga server open /etc/icinga/objects/commands.cfg
and add the following lines:
define command {
        command_name    check_events
        command_line    /usr/lib/nagios/plugins/check_nrpe -H manager.suse.de -c $ARG1$ -a $HOSTNAME$
}
On your Icinga server add the following line to a host.cfg service definition:
define service {
        use                     generic-service
        host_name               hostname
        service_description     Last Events
        check_command           check_events!check_suma_lastevent
}
Status will be reported as follows:
OK: Last action completed successfully
Warning: Action is currently in progress
Critical: Last action failed
Unspecified: The host cannot be found in the SUSE Manager database or the host name is not unique
This concludes setup of the check_suma_lastevent
plugin.
For more information, see Icinga’s official documentation located at http://docs.icinga.org/latest/en.
For some excellent time saving configuration tips and tricks not covered in this guide, see the following section located within the official documentation: http://docs.icinga.org/latest/en/objecttricks.html
SUSE Manager enables system administrators to build containers, systems, and virtual images. SUSE Manager helps with creating Image Stores and managing Image Profiles.
SUSE Manager supports two distinct build types:
Dockerfile: for more information, see Section 8.2, “Container Images”
Kiwi image system: for more information, see Section 8.3, “OS Images”
The containers feature is available for Salt minions running SUSE Linux Enterprise Server 12 or later. Before you begin, ensure your environment meets these requirements:
An existing external GitHub or internal GitLab repository containing a Dockerfile and configuration scripts (example scripts are provided in this chapter).
A properly configured image registry.
If you require a private image registry, you can use an open source solution such as Portus.
For additional information on setting up Portus as a registry provider, see the Portus Documentation.
For more information on Containers or CaaS Platform, see:
To build images with SUSE Manager, you will need to create and configure a build host. Container build hosts are Salt minions running SUSE Linux Enterprise 12 or later. This section guides you through the initial configuration for a build host.
From the SUSE Manager Web UI perform these steps to configure a build host.
Select a minion to be designated as a build host from the
› page.From the System Details
page for the selected minion assign the containers modules by going to › and enabling SLE-Module-Containers12-Pool
and SLE-Module-Containers12-Updates
. Confirm by clicking .
From the Add-on System Type list, check Container Build Host
and confirm by clicking .
Install all required packages by applying Highstate
. From the system details page select › and click Apply Highstate
.
Alternatively, apply Highstate from the SUSE Manager Server command line:
salt '$your_minion' state.highstate
Create an activation key associated with the channel that your images will use.
To build containers, you will need an activation key that is associated with a channel other than "SUSE Manager Default".
Select
› › .Click
.Enter a Description
and a Key
name. Use the drop-down menu to select the Base Channel
to associate with this key.
Confirm with
.For more information, see Book “Best Practices”, Chapter 7 “Activation Key Management”.
Define a location to store all of your images by creating an Image Store.
Select
› › .Click Create
to create a new store.
SUSE Manager currently provides support only for the Registry
store type. Define a name for the image store in the Label
field.
Provide the path to your image registry by filling in the URI
field, as a fully qualified domain name (FQDN) for the container registry host (whether internal or external).
registry.example.com
The Registry URI can also be used to specify an image store on a registry that is already in use.
registry.example.com:5000/myregistry/myproject
Click
to add the new image store.Manage Image Profiles from the Image Profile
page.
To create an image profile select
› and click .Provide a name for the image profile by filling in the
field.Only lowercase characters are permitted in container labels.
If your container image tag is in a format such as myproject/myimage
, make sure your image store registry URI contains the /myproject
suffix.
Use a Dockerfile
as the Image Type
Use the drop-down menu to select your registry from the Target Image Store
field.
Enter a Github or Gitlab repository URL (http, https, or token authentication) in the Path
field using one of the following formats:
Github single user project repository
https://github.com/USER/project.git#branchname:folder
Github organization project repository
https://github.com/ORG/project.git#branchname:folder
Github token authentication:
If your git repository is private and not publicly accessible, you need to modify the profile’s git URL to include authentication. Use this URL format to authenticate with a Github token:
https://USER:<AUTHENTICATION_TOKEN>@github.com/USER/project.git#master:/container/
Gitlab single user project repository
https://gitlab.example.com/USER/project.git#master:/container/
Gitlab groups project repository
https://gitlab.example.com/GROUP/project.git#master:/container/
Gitlab token authentication If your git repository is private and not publicly accessible, you need to modify the profile’s git URL to include authentication. Use this URL format to authenticate with a Gitlab token:
https://gitlab-ci-token:<AUTHENTICATION_TOKEN>@gitlab.example.com/USER/project.git#master:/container/
If a branch is not specified, the master
branch will be used by default.
If a folder
is not specified, the image sources (Dockerfile
sources) are expected to be in the root directory of the Github or Gitlab checkout.
Select an Activation Key
. Activation Keys ensure that images using a profile are assigned to the correct channel and packages.
When you associate an activation key with an image profile you are ensuring any image using the profile will use the correct software channel and any packages in the channel.
Click the
button.This section contains an example Dockerfile. You specify a Dockerfile that will be used during image building when creating an image profile. A Dockerfile and any associated scripts should be stored within an internal or external Github or Gitlab repository:
The Dockerfile provides access to a specific repository version served by SUSE Manager.
This example Dockerfile is used by SUSE Manager to trigger a build job on a build host minion.
The ARG
parameters ensure that the image that is built is associated with the desired repository version served by SUSE Manager.
The ARG
parameters also allow you to build image versions of SUSE Linux Enterprise Server which may differ from the version of SUSE Linux Enterprise Server used by the build host itself.
For example, the ARG repo
parameter together with the echo
command writing to the repository file creates and injects the correct path into the repository file for the desired channel version.
The repository version is determined by the activation key that you assigned to your image profile.
FROM registry.example.com/sles12sp2
MAINTAINER Tux Administrator "tux@example.com"

### Begin: These lines required for use with SUSE Manager
ARG repo
ARG cert

# Add the correct certificate
RUN echo "$cert" > /etc/pki/trust/anchors/RHN-ORG-TRUSTED-SSL-CERT.pem

# Update certificate trust store
RUN update-ca-certificates

# Add the repository path to the image
RUN echo "$repo" > /etc/zypp/repos.d/susemanager:dockerbuild.repo
### End: These lines required for use with SUSE Manager

# Add the package script
ADD add_packages.sh /root/add_packages.sh

# Run the package script
RUN /root/add_packages.sh

# After building remove the repository path from image
RUN rm -f /etc/zypp/repos.d/susemanager:dockerbuild.repo
This is an example add_packages.sh
script for use with your Dockerfile:
#!/bin/bash
set -e
zypper --non-interactive --gpg-auto-import-keys ref
zypper --non-interactive in python python-xml aaa_base aaa_base-extras net-tools timezone vim less sudo tar
To inspect images and provide the package and product list of a container to the SUSE Manager Web UI you will need to install python and python-xml within the container. If these packages remain uninstalled, your images will still build, but the package and product list will be unavailable from the Web UI.
There are two ways to build an image. You can select
› from the left navigation bar, or click the build icon in the › list.For this example select
› .Add a different tag name if you want a version other than the default latest
(only relevant to containers).
Select Build Profile
and Build Host
.
Notice the Profile Summary
to the right of the build fields.
When you have selected a build profile, detailed information about the selected profile will be displayed in this area.
To schedule a build click the
button.You can import and inspect arbitrary images.
Select Import to open the Import Image dialog.
Once it has processed, the imported image will be listed on the Images
page.
In the Import Image
dialog complete these fields:
The registry from where the image will be pulled for inspection.
The name of the image in the registry.
The version of the image in the registry.
The build host that will pull and inspect the image.
The activation key that provides the path to the software channel that the image will be inspected with.
For confirmation, click
.The entry for the image is created in the database, and an Inspect Image
action on SUSE Manager is scheduled.
Once it has been processed, you can find the imported image in the Images
list.
It has a different icon in the Build
column, to indicate that the image is imported (see screenshot).
The status icon for the imported image can also be seen on the Overview
tab for the image.
These are some known problems that you might encounter when working with images:
HTTPS certificates to access the registry or the git repositories should be deployed to the minion by a custom state file.
SSH git access using Docker is currently unsupported. You may test it, but SUSE will not provide support.
If the python and python-xml packages are not installed in your images during the build process, Salt cannot run within the container and reporting of installed packages or products will fail.
This will result in an unknown
update status.
OS images are built by the Kiwi image system. They can be of various types: PXE, QCOW2, LiveCD images, and others.
For more information about the Kiwi build system, see the Kiwi documentation.
OS image building is disabled by default. You can enable it in /etc/rhn/rhn.conf
by setting:
java.kiwi_os_image_building_enabled = true
Restart the Spacewalk service to pick up the changes:
spacewalk-service restart
The Kiwi image building feature is available for Salt minions running SUSE Linux Enterprise Server 12. Building SUSE Linux Enterprise 15 images is currently not supported.
Kiwi image configuration files and configuration scripts must be accessible in one of these locations:
Git repository
HTTP hosted tarball
Local build host directory
Example scripts are provided in the following sections.
Hosts running OS images built with Kiwi need at least 1 GB of RAM. Disk space depends on the actual size of the image. For more information, see the documentation of the underlying system.
To build all kinds of images with SUSE Manager, create and configure a build host. OS image build hosts are Salt minions running SUSE Linux Enterprise Server 12 (SP3 or later). This procedure will guide you through the initial configuration for a build host.
From the SUSE Manager Web UI perform these steps to configure a build host:
Select a minion that will be designated as a build host from the
› › page.From the Add-on System Type:
OS Image Build Host
and confirm with .
From the SLE-Manager-Tools12-Pool
and SLE-Manager-Tools12-Updates
(or a later version).
Schedule and click .
Install Kiwi and all required packages by applying Highstate. From the system details page select
› and click . Alternatively, apply Highstate from the SUSE Manager Server command line:salt '$your_minion' state.highstate
SUSE Manager Web Server Public Certificate RPM. Build host provisioning copies the SUSE Manager certificate RPM to the build host. This certificate is used for accessing repositories provided by SUSE Manager.
The certificate is packaged in RPM by the mgr-package-rpm-certificate-osimage
package script.
The package script is called automatically during a new SUSE Manager installation.
When you upgrade the spacewalk-certs-tools
package, the upgrade scenario will call the package script using the default values.
However, if the certificate path has changed or is unavailable, you will need to call the package script manually using --ca-cert-full-path <path_to_certificate>
after the upgrade procedure has finished.
Package script call example.
/usr/sbin/mgr-package-rpm-certificate-osimage --ca-cert-full-path /root/ssl-build/RHN-ORG-TRUSTED-SSL-CERT
The RPM package with the certificate is stored in a salt-accessible directory such as /usr/share/susemanager/salt/images/rhn-org-trusted-ssl-cert-osimage-1.0-1.noarch.rpm
.
The RPM package with the certificate is provided in the local build host repository /var/lib/Kiwi/repo
.
Make sure your build source Kiwi configuration contains rhn-org-trusted-ssl-cert-osimage
as a required package in the bootstrap
section.
config.xml.
...
<packages type="bootstrap">
  ...
  <package name="rhn-org-trusted-ssl-cert-osimage" bootinclude="true"/>
</packages>
...
Create an activation key associated with the channel that your images will use. Activation keys are mandatory for OS Image building.
To build OS Images, you will need an activation key that is associated with a channel other than "SUSE Manager Default".
In the Web UI, select
› › .Click Create Key
.
Enter a Description
, a Key
name, and use the drop-down box to select a Base Channel
to associate with the key.
Confirm with
.For more information, see Book “Best Practices”, Chapter 7 “Activation Key Management”.
OS images can require a significant amount of storage space.
Therefore, we recommend that the OS image store be located on its own partition or on a Btrfs subvolume, separate from the root partition.
By default, the image store will be located at /srv/www/os-images
.
Image stores for the Kiwi build type, used to build system, virtual, and other images, are not yet supported.
Images are always stored in /srv/www/os-images/<organization id>
and are accessible via HTTP/HTTPS at https://<susemanager_host>/os-images/<organization id>.
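For example, a finished image for organization 1 could be downloaded with a command of this shape; the organization ID and file name are hypothetical:

curl -O https://susemanager.example.com/os-images/1/POS_Image_JeOS6.x86_64-6.0.0.gz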
Manage Image Profiles using the Web UI.
To create an image profile select from
› › › and click .In the Label
field, provide a name for the Image Profile
.
Use Kiwi
as the Image Type
.
Image store is automatically selected.
Enter a Config URL
to the directory containing the Kiwi configuration files:
Git URI
HTTPS tarball
Path to build host local directory
Select an Activation Key
. Activation keys ensure that images using a profile are assigned to the correct channel and packages.
When you associate an activation key with an image profile you are ensuring any image using the profile will use the correct software channel and any packages in the channel.
Confirm with the
button.Git/HTTP(S) URL to the repository
URL to the Git repository containing the sources of the image to be built. Depending on the layout of the repository the URL can be:
https://github.com/SUSE/manager-build-profiles
You can specify a branch after the #
character in the URL.
In this example, we use the master
branch:
https://github.com/SUSE/manager-build-profiles#master
You can specify a directory that contains the image sources after the :
character.
In this example, we use OSImage/POS_Image-JeOS6
:
https://github.com/SUSE/manager-build-profiles#master:OSImage/POS_Image-JeOS6
HTTP(S) URL to the tarball
URL to the tar archive, compressed or uncompressed, hosted on the webserver.
https://myimagesourceserver.example.org/MyKiwiImage.tar.gz
Path to the directory on the build host
Enter the path to the directory with the Kiwi build system sources. This directory must be present on the selected build host.
/var/lib/Kiwi/MyKiwiImage
Kiwi sources consist of at least config.xml
.
Usually config.sh
and images.sh
are present as well.
Sources can also contain files to be installed in the final image under the root
subdirectory.
For information about the Kiwi build system, see the Kiwi documentation.
SUSE provides examples of fully functional image sources at the SUSE/manager-build-profiles public GitHub repository.
Example of JeOS config.xml.
<?xml version="1.0" encoding="utf-8"?>

<image schemaversion="6.1" name="POS_Image_JeOS6">
  <description type="system">
    <author>Admin User</author>
    <contact>noemail@example.com</contact>
    <specification>SUSE Linux Enterprise 12 SP3 JeOS</specification>
  </description>
  <preferences>
    <version>6.0.0</version>
    <packagemanager>zypper</packagemanager>
    <bootsplash-theme>SLE</bootsplash-theme>
    <bootloader-theme>SLE</bootloader-theme>
    <locale>en_US</locale>
    <keytable>us.map.gz</keytable>
    <timezone>Europe/Berlin</timezone>
    <hwclock>utc</hwclock>
    <rpm-excludedocs>true</rpm-excludedocs>
    <type boot="saltboot/suse-SLES12" bootloader="grub2" checkprebuilt="true" compressed="false" filesystem="ext3" fsmountoptions="acl" fsnocheck="true" image="pxe" kernelcmdline="quiet"></type>
  </preferences>
  <!-- CUSTOM REPOSITORY
  <repository type="rpm-dir">
    <source path="this://repo"/>
  </repository>
  -->
  <packages type="image">
    <package name="patterns-sles-Minimal"/>
    <package name="aaa_base-extras"/> <!-- wouldn't be SUSE without that ;-) -->
    <package name="kernel-default"/>
    <package name="salt-minion"/>
    ...
  </packages>
  <packages type="bootstrap">
    ...
    <package name="sles-release"/>
    <!-- this certificate package is required to access SUSE Manager
         repositories and is provided by SUSE Manager automatically -->
    <package name="rhn-org-trusted-ssl-cert-osimage" bootinclude="true"/>
  </packages>
  <packages type="delete">
    <package name="mtools"/>
    <package name="initviocons"/>
    ...
  </packages>
</image>
There are two ways to build an image using the Web UI. Either select
› › , or click the build icon in the › › list.Select
› › .Add a different tag name if you want a version other than the default latest
(applies only to containers).
Select the Image Profile
and a Build Host
.
A Profile Summary
is displayed to the right of the build fields.
When you have selected a build profile detailed information about the selected profile will show up in this area.
To schedule a build, click the
button.After the image is successfully built, the inspection phase begins. During the inspection phase SUSE Manager collects information about the image:
List of packages installed in the image
Checksum of the image
Image type and other image details
If the built image type is PXE
, a Salt pillar will also be generated.
Image pillars are stored in the /srv/susemanager/pillar_data/images/
directory and the Salt subsystem can access details about the generated image.
Details include where the pillar is located and provided, image checksums, information needed for network boot, and more.
The generated pillar is available to all connected minions.
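To confirm that a minion can see the generated image pillar, a quick query such as the following can help; the minion name is illustrative, and the exact pillar key may differ between versions (images is assumed here):

salt 'terminal.example.com' pillar.get images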
Building an image consists of several dependent steps. When the build fails, investigating the Salt state results can help you identify the source of the failure. Usual checks when the build fails:
The build host can access the build sources
There is enough disk space for the image on both the build host and the SUSE Manager server
The activation key has the correct channels associated with it
The build sources used are valid
The RPM package with the SUSE Manager public certificate is up to date and available at /usr/share/susemanager/salt/images/rhn-org-trusted-ssl-cert-osimage-1.0-1.noarch.rpm
.
For more on how to refresh a public certificate RPM, see Section 8.3.3, “Creating a Build Host”.
This section contains some known issues when working with images.
HTTPS certificates used to access the HTTP sources or Git repositories should be deployed to the minion by a custom state file, or configured manually.
Importing Kiwi-based images is not supported.
To list images available for building select
› › . A list of all images will be displayed.Displayed data about images includes an image Name
, its Version
and the build Status
.
You will also see the image update status with a listing of possible patch and package updates that are available for the image.
Clicking the
button on an image will provide a detailed view including an exact list of relevant patches and a list of all packages installed within the image.The patch and the package list is only available if the inspect state after a build was successful.
The prerequisites listed below should be met before proceeding.
At least one Kubernetes or SUSE CaaS Platform cluster available on your network
SUSE Manager configured for container management
Required channels are present, a registered build host is available, etc.
virtual-host-gatherer-Kubernetes package installed on your SUSE Manager server
Kubernetes version 1.5.0 or higher. Alternatively use SUSE CaaS Platform (SUSE CaaS Platform includes Kubernetes 1.5.0 by default)
Docker version 1.12 or higher on the container build host
To enable all Kubernetes related features within the Web UI, the virtual-host-gatherer-Kubernetes package must be installed.
Kubernetes clusters are registered with SUSE Manager as virtual host managers
.
Registration and authorization begin with importing a kubeconfig file using the official Kubernetes command line tool kubectl.
Select
› from the navigation menu.Expand the Create
dropdown in the upper right corner of the page and select .
Input a label for the new Virtual Host Manager.
Select the kubeconfig
file which contains the required data for the Kubernetes cluster.
Select the correct context for the cluster, as specified in the kubeconfig file.
Click Create
.
Select
› from the navigation menu.Select the desired Kubernetes cluster to view it.
Node data is not refreshed during registration.
To refresh node data, click on Schedule refresh data
.
Refresh the browser. If the node data is not available, wait a few moments and try again.
See the following steps to find runtime data for images.
Select
› from the navigation menu.In the image list table, take notice of the new runtime columns.
These are labeled: Revision
, Runtime
and Instances
.
Initially these columns will not provide useful data.
Revision
: An artificial sequence number which increments on every rebuild for manager-built images, or on every reimport for externally built images.
Runtime
: Overall status of the running instances of the image throughout the registered clusters.
The status can be one of the following:
All instances are consistent with SUSE Manager: All the running instances are running the same build of the image as tracked by SUSE Manager.
Outdated instances found: Some of the instances are running an older build of the image. A redeploy of the image into the pod may be required.
No information: The checksum of the instance image does not match the image data contained in SUSE Manager. A redeploy of the image into the pod may be required.
Instances
: Number of instances running this image across all the clusters registered in SUSE Manager.
A breakdown of numbers can be seen by clicking on the pop-up icon next to the number.
The following steps will help you build an image for deployment in Kubernetes.
Under
› , create an image store. Under
› , create an image profile (with a Dockerfile which is suitable to deploy to Kubernetes). Under
› , build an image with the new profile and wait for the build to finish. Deploy the image into one of the registered Kubernetes clusters (via kubectl; a minimal sketch follows these steps).
Notice the updated data in Runtime
and Instances
columns in the respective image row.
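A minimal deployment sketch, assuming the image was pushed to registry.example.com/myproject/myimage (all names are illustrative):

kubectl run myimage --image=registry.example.com/myproject/myimage:latest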
The following steps will guide you through importing a previously deployed image in Kubernetes.
Select an image that has already been deployed to any of your registered Kubernetes clusters.
Add the registry owning the image to SUSE Manager as an image store.
Select Import
from the top-right corner, fill in the form fields and click Import
.
Notice the updated data in Runtime
and Instances
columns in the respective image row.
The following steps will help you find additional runtime data.
Select the Details
button on the right end of a row which has running instances.
Under the Overview
tab, notice the data in Runtime
and Instances
fields under Image Info
section.
Select the Runtime
tab.
Here is a breakdown of the Kubernetes pods running this image in all the registered clusters including the following data:
Pod name
Namespace which the pod resides in
The runtime status of the container in the specific pod. Status icons are explained in the preceding example.
The following steps will guide you through rebuilding an image which has been deployed to a Kubernetes cluster.
Go to
› , click the Details button on the right end of a row which has running instances. The image must be manager-built.Click the Rebuild
button located under the Build Status
section and wait for the build to finish.
Notice the change in the Runtime
icon and title, reflecting the fact that the instances are now running a previous build of the image.
Currently, only kubeconfig files containing all embedded certificate data may be used with SUSE Manager.
The API calls from SUSE Manager are:
GET /api/v1/pods
GET /api/v1/nodes
According to this list, the minimum recommended permissions for SUSE Manager should be as follows:
A ClusterRole to list all the nodes:
resources: ["nodes"] verbs: ["list"]
A ClusterRole to list pods in all namespaces (role binding must not restrict the namespace):
resources: ["pods"] verbs: ["list"]
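A complete manifest implementing both rules might look like the following sketch, here combined into a single ClusterRole; the role and user names are illustrative, and the RBAC API version may differ on older clusters:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: suma-virtual-host-gatherer
rules:
- apiGroups: [""]
  resources: ["nodes", "pods"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: suma-virtual-host-gatherer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: suma-virtual-host-gatherer
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: suma-reader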
In the case of a 403 response from /pods, the entire cluster will be ignored by SUSE Manager.
For more information on working with RBAC Authorization see: https://kubernetes.io/docs/admin/authorization/rbac/
SUSE Manager features the Cobbler server, which allows administrators to centralize system installation and provisioning infrastructure. Cobbler is an installation server that provides various methods of performing unattended system installations. It can be used on server, workstation, or guest systems, in full or para-virtualized environments.
Cobbler offers several tools for pre-installation guidance, automated installation file management, installation environment management, and more. This section explains some of the supported features of Cobbler, including:
Installation environment analysis using the cobbler check
command
Multi-site installation server configuration using the cobbler replicate
command
Virtual machine guest installation automation with the koan
client-side tool
Building installation ISOs with PXE-like menus using the cobbler buildiso
command (for SUSE Manager systems with x86_64 architecture)
For more detailed Cobbler documentation, see http://cobbler.github.io/manuals/.
SUSE only supports those Cobbler functions that are either listed within our formal documentation or available via the web interface and API.
To use Cobbler for system installation with PXE, you will require a TFTP server. SUSE Manager installs a TFTP server by default.
To PXE boot systems, you will require a DHCP server, or access to a network DHCP server. Edit the /etc/dhcpd.conf
configuration file to change next-server
to the hostname or IP address of your Cobbler server.
Cobbler uses hostnames as a unique key for each system.
If you are using the pxe-default-image
to onboard bare metal systems, make sure every system has a unique hostname.
Non-unique hostnames will cause all systems with the same hostname to have their configuration files overwritten when a provisioning profile is assigned.
Cobbler configuration is primarily managed using the /etc/cobbler/settings
file.
Cobbler will run with the default settings unchanged.
All configurable settings are explained in detail in the /etc/cobbler/settings
file, including information on each setting, and recommendations.
If SUSE Manager complains that language en
was not found within the list of supported languages available at /usr/share/YaST2/data/languages
, remove the lang
parameter in the /etc/cobbler/settings
file, or add a valid parameter such as en_US
.
For more on this topic, see https://www.suse.com/support/kb/doc?id=7018334.
Cobbler uses DHCP to automate bare metal installations from a PXE boot server. You must have administrative access to the network's DHCP server, or be able to configure DHCP directly on the Cobbler server.
If you have an existing DHCP server, you will need to edit the DHCP configuration file so that it points to the Cobbler server and PXE boot image.
As root on the DHCP server, edit the /etc/dhcpd.conf
file and append a new class with options for performing PXE boot installation.
For example:
allow booting;
allow bootp; 1
class "PXE" 2
{
    match if substring(option vendor-class-identifier, 0, 9) = "PXEClient"; 3
    next-server 192.168.2.1; 4
    filename "pxelinux.0"; 5
}
1. Enable network booting with the bootp protocol.
2. Create a class called PXE.
3. A system configured to have PXE first in its boot priority identifies itself as PXEClient.
4. As a result, the DHCP server directs the system to the Cobbler server at 192.168.2.1.
5. The DHCP server retrieves the pxelinux.0 boot image file.
It is possible to set up PXE booting in KVM; however, we do not recommend this method for production systems.
This method can replace the next-server
setting on a DHCP server, as described in Section 10.2.2.1, “Configuring an Existing DHCP Server”.
Edit the network XML description with virsh
:
Produce an XML dump of the current description:
virsh net-dumpxml --inactive network > network.xml
Open the XML dump file at network.xml
with a text editor and add a bootp
parameter within the <dhcp>
element:
<bootp file='/pxelinux.0' server='192.168.100.153'/>
Install the updated description:
virsh net-define network.xml
Alternatively, use the net-edit
subcommand, which will also perform some error checking.
<network>
<name>default</name>
<uuid>1da84185-31b5-4c8b-9ee2-a7f5ba39a7ee</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:29:59:18'/>
<domain name='default'/>
<ip address='192.168.100.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.100.128' end='192.168.100.254'/>
<bootp file='/pxelinux.0' server='192.168.100.153'/>
</dhcp>
</ip>
</network>
SUSE Manager uses the atftpd
daemon, but it can also use the standard tftpd daemon.
The atftpd
daemon is the recommended method for PXE services, and is installed by default.
Usually, you do not have to change its configuration, but if you have to, use the YaST Services Manager.
Before TFTP can serve the pxelinux.0
boot image, you must start the tftp service.
Start YaST and use › to configure the tftpd
daemon.
It is possible to synchronize Cobbler-generated TFTP contents to SUSE Manager proxies to perform PXE booting using proxies.
On the SUSE Manager Server as the root user, install the susemanager-tftpsync
package:
zypper install susemanager-tftpsync
On the SUSE Manager Proxy systems as the root user, install the susemanager-tftpsync-recv
package:
zypper install susemanager-tftpsync-recv
Execute configure-tftpsync.sh
on the SUSE Manager Proxy systems.
This setup script asks for hostnames and IP addresses of the SUSE Manager server and the proxy.
Additionally, it asks for the tftpboot
directory on the proxy.
For more information, see the output of configure-tftpsync.sh --help
.
As the root user, execute configure-tftpsync.sh
on SUSE Manager Server:
configure-tftpsync.sh proxy1.example.com proxy2.example.com
Execute cobbler sync
to initially push the files to the proxy systems.
This will succeed if all listed proxies are properly configured.
You can call configure-tftpsync.sh
to change the list of proxy systems.
You must always provide the full list of proxy systems.
If you reinstall an already configured proxy and want to push all the files again, you must remove the cache file at /var/lib/cobbler/pxe_cache.json
before you can call cobbler sync
again.
The SUSE Manager Server must be able to access the SUSE Manager Proxy systems directly. You cannot push using a proxy.
Before starting the Cobbler service, run a check to make sure that all the prerequisites are configured according to your requirements using the cobbler check
command.
If configuration is correct, start the SUSE Manager server with this command:
/usr/sbin/spacewalk-service start
Do not start or stop the cobblerd
service independent of the SUSE Manager service.
Doing so may cause errors and other issues.
Always use /usr/sbin/spacewalk-service
to start or stop SUSE Manager.
If all Cobbler prerequisites have been met and Cobbler is running, you can use the Cobbler server as an installation source for a distribution:
Make installation files such as the kernel image and the initrd image available on the Cobbler server. Then add a distribution to Cobbler, using either the Web interface or the command line tools.
For information about creating and configuring AutoYaST or Kickstart distributions from the SUSE Manager interface, refer to Book “Reference Manual”, Chapter 7 “Systems”, Section 7.18 “Autoinstallation > Distributions”.
To create a distribution from the command line, use the cobbler
command as root:
cobbler distro add --name=string --kernel=path --initrd=path
--name=string
A label used to differentiate one distribution choice from another (for example, sles12server).

--kernel=path
Specifies the path to the kernel image file.

--initrd=path
Specifies the path to the initial RAM disk (initrd) image file.
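For illustration, assuming the installation media is mounted at a hypothetical /mnt/sles12, a concrete invocation could look like this:

cobbler distro add --name=sles12server \
  --kernel=/mnt/sles12/boot/x86_64/loader/linux \
  --initrd=/mnt/sles12/boot/x86_64/loader/initrd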
Once you have added a distribution to Cobbler, you can add profiles.
Cobbler profiles associate a distribution with additional options like AutoYaST or Kickstart files. Profiles are the core unit of provisioning and there must be at least one Cobbler profile for every distribution added. For example, two profiles might be created for a Web server and a desktop configuration. While both profiles use the same distribution, the profiles are for different installation types.
For information about creating and configuring Kickstart and AutoYaST profiles in the SUSE Manager interface, refer to Book “Reference Manual”, Chapter 7 “Systems”, Section 7.15 “Autoinstallation > Profiles (Kickstart and AutoYaST)”.
Use the cobbler
command as root to create profiles from the command line:
cobbler profile add --name=string --distro=string [--kickstart=url] \ [--virt-file-size=gigabytes] [--virt-ram=megabytes]
--name=string
A unique label for the profile, such as sles12webserver or sles12workstation.

--distro=string
The distribution that will be used for this profile. For adding distributions, see Section 10.3, “Adding a Distribution to Cobbler”.

--kickstart=url
The location of the Kickstart file (if available).

--virt-file-size=gigabytes
The size of the virtual guest file image (in gigabytes). The default is 5 GB.

--virt-ram=megabytes
The maximum amount of physical RAM a virtual guest can consume (in megabytes). The default is 512 MB.
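For example, the following creates a web server profile from the distribution added above (the names are illustrative):

cobbler profile add --name=sles12webserver --distro=sles12server \
  --virt-file-size=10 --virt-ram=1024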
Once the distributions and profiles for Cobbler have been created, add systems to Cobbler. System records map a piece of hardware on a client with the Cobbler profile assigned to run on it.
If you are provisioning using koan
and PXE menus alone, it is not required to create system records.
They are useful when system-specific Kickstart templating is required or to establish that a specific system should always get specific content installed.
If a client is intended for a certain role, system records should be created for it.
For information about creating and configuring automated installation from the SUSE Manager interface, refer to Book “Reference Manual”, Chapter 7 “Systems”, Section 7.3 “System Details”.
Run this command as the root user to add a system to the Cobbler configuration:
cobbler system add --name=string --profile=string \ --mac-address=AA:BB:CC:DD:EE:FF
--name=string
A unique label for the system, such as engineering_server or frontoffice_workstation.

--profile=string
Specifies the name of one of the profiles added in Section 10.4, “Adding a Profile to Cobbler”.

--mac-address=AA:BB:CC:DD:EE:FF
Allows systems with the specified MAC address to automatically be provisioned to the profile associated with the system record when they are being installed.
For more options, such as setting hostname or IP addresses, refer to the Cobbler manpage (man cobbler
).
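Continuing the example above, a system record for a hypothetical engineering machine could look like this:

cobbler system add --name=engineering_server --profile=sles12webserver \
  --mac-address=AA:BB:CC:DD:EE:FF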
The SUSE Manager web interface allows you to create variables for use with Kickstart distributions and profiles. For more information on creating Kickstart profile variables, refer to Book “Reference Manual”, Chapter 7 “Systems”, Section 7.15 “Autoinstallation > Profiles (Kickstart and AutoYaST)”, Section 7.15.1 “Create a Kickstart Profile”, Section 7.15.1.3 “Autoinstallation Details > Variables”.
Kickstart variables are part of an infrastructure change in SUSE Manager to support templating in Kickstart files. Kickstart templates are files that describe how to build Kickstart files, rather than creating specific Kickstarts. The templates are shared by various profiles and systems that have their own variables and corresponding values. These variables modify the templates and a template engine parses the template and variable data into a usable Kickstart file. Cobbler uses an advanced template engine called Cheetah that provides support for templates, variables, and snippets.
Advantages of using templates include:
Robust features that allow administrators to create and manage large amounts of profiles or systems without duplication of effort or manually creating Kickstarts for every unique situation.
While templates can become complex and involve loops, conditionals and other enhanced features and syntax, you can also create simpler Kickstart files without such complexity.
Kickstart templates can have static values for certain common items such as PXE image file names, subnet addresses, and common paths such as /etc/sysconfig/network-scripts/
.
However, templates differ from standard Kickstart files in their use of variables.
For example, a standard Kickstart file may have a networking section similar to this:
network --device=eth0 --bootproto=static --ip=192.168.100.24 \ --netmask=255.255.255.0 --gateway=192.168.100.1 --nameserver=192.168.100.2
In a Kickstart template file, the networking section would look like this instead:
network --device=$net_dev --bootproto=static --ip=$ip_addr \ --netmask=255.255.255.0 --gateway=$my_gateway --nameserver=$my_nameserver
These variables are substituted with the values set in your Kickstart profile variables or in your system detail variables. If the same variable is defined in both the profile and the system detail, then the system detail variable takes precedence.
Autoinstallation templates follow syntax rules that rely on punctuation symbols. To avoid clashes with these symbols, they need to be escaped properly.
In case the autoinstallation scenario contains any shell script using variables like $(example)
, its content should be escaped by using the backslash symbol: \$(example)
.
If the variable named example
is defined in the autoinstallation snippet, the templating engine will evaluate $example
with its content.
If there is no such variable, the content will be left unchanged.
Escaping the $ symbol will prevent the templating engine from evaluating the symbol as an internal variable.
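For example, in a template where a profile variable named hostname is defined (a hypothetical variable for illustration), escaping keeps the shell substitution intact while Cheetah expands the variable:

echo "Provisioned host: $hostname"    # expanded by the template engine
echo "Running kernel: \$(uname -r)"   # escaped; evaluated later by the shell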
Long scripts or strings can be escaped by wrapping them with the #raw and #end raw directives.
For example:
#raw
#!/bin/bash
for i in {0..2}; do
    echo "$i - Hello World!"
done
#end raw
Also, pay attention to scenarios like this:
#start some section (this is a comment)
echo "Hello, world"
#end some section (this is a comment)
Any line with a # symbol followed by a whitespace is treated as a comment and is therefore not evaluated.
For more information about Kickstart templates, refer to the Cobbler project page.
If you have common configurations across all Kickstart templates and profiles, you can use the Snippets feature of Cobbler to take advantage of code reuse.
Kickstart snippets are sections of Kickstart code that can be called by a $SNIPPET()
function that will be parsed by Cobbler and substituted with the contents of the snippet.
For example, you might have a common hard drive partition configuration for all servers, such as:
clearpart --all
part /boot --fstype ext3 --size=150 --asprimary
part / --fstype ext3 --size=40000 --asprimary
part swap --recommended
part pv.00 --size=1 --grow
volgroup vg00 pv.00
logvol /var --name=var --vgname=vg00 --fstype ext3 --size=5000
Save this snippet of the configuration to a file like my_partition
and place the file in /var/lib/cobbler/snippets/
, where Cobbler can access it.
Use the snippet by calling the $SNIPPET()
function in your Kickstart templates.
For example:
$SNIPPET('my_partition')
Wherever you invoke that function, the Cheetah parser will substitute the function with the snippet of code contained in the my_partition
file.
Whether you are provisioning guests on a virtual machine or reinstalling a new distribution on a running system, Koan works in conjunction with Cobbler to provision systems.
If you have created a virtual machine profile as documented in Section 10.4, “Adding a Profile to Cobbler”, you can use koan
to initiate the installation of a virtual guest on a system.
For example, create a Cobbler profile with the following command:
cobbler profile add --name=virtualfileserver \
  --distro=sles12-x86_64-server --virt-file-size=20 --virt-ram=1000
This profile is for a fileserver running SUSE Linux Enterprise Server 12 with a 20 GB guest image size and allocated 1 GB of system RAM.
To find the name of the virtual guest system profile, use the koan
command:
koan --server=hostname --list-profiles
This command lists all the available profiles created with cobbler profile add
.
Create the image file, and begin installation of the virtual guest system:
koan --virt --server=cobbler-server.example.com \ --profile=virtualfileserver --virtname=marketingfileserver
This command specifies that a virtual guest system be created from the Cobbler server (hostname cobbler-server.example.com
) using the virtualfileserver
profile.
The virtname
option specifies a label for the virtual guest, which by default is the system’s MAC address.
Once the installation of the virtual guest is complete, it can be used as any other virtual guest system.
koan
can replace a still running system with a new installation from the available Cobbler profiles by executing the following command on the system to be reinstalled:
koan --replace-self --server=hostname --profile=name
This command, running on the system to be replaced, will start the provisioning process and replace the system with the profile in --profile=name
on the Cobbler server specified in --server=hostname
.
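For example, reinstalling the running system with the hypothetical web server profile from the Cobbler server above:

koan --replace-self --server=cobbler-server.example.com --profile=sles12webserver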
Some environments might lack PXE support.
The Cobbler buildiso
command creates an ISO boot image containing a set of distributions and kernels, and a menu similar to PXE network installations.
Define the name and output location of the boot ISO using the --iso
option.
Depending on Cobbler-related systemd settings (see /usr/lib/systemd/system/cobblerd.service), writing ISO images to public tmp directories will not work.
cobbler buildiso --iso=/path/to/boot.iso
The boot ISO includes all profiles and systems by default.
Limit these profiles and systems using the --profiles
and --systems
options.
cobbler buildiso --systems="system1,system2,system3" \ --profiles="profile1,profile2,profile3"
Building ISOs with the cobbler buildiso
command is supported for all architectures except the z Systems architecture.
Systems that have not yet been provisioned are called bare metal systems.
You can provision bare metal systems using Cobbler.
Once a bare metal system has been provisioned in this way, it will appear in the Systems
list, where you can perform regular provisioning with autoinstallation, for a completely unattended installation.
To successfully provision a bare metal system, you will require a fully patched SUSE Manager server, version 2.1 or higher.
The system to be provisioned must have x86_64 architecture, with at least 1 GB RAM, and be capable of PXE booting.
The server uses TFTP to provision the bare metal client, so the appropriate port and networks must be configured correctly in order for provisioning to be successful. In particular, ensure that you have a DHCP server, and have set the next-server
parameter to the SUSE Manager server IP address or hostname.
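A minimal dhcpd.conf fragment could look like this, assuming the SUSE Manager server is reachable at 192.168.2.1 as in the earlier DHCP example (the subnet and range are hypothetical):

subnet 192.168.2.0 netmask 255.255.255.0 {
  range 192.168.2.100 192.168.2.200;
  next-server 192.168.2.1;
  filename "pxelinux.0";
}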
Bare metal systems management can be enabled or disabled in the Web UI. New systems are added to the organization of the administrator who enabled the bare metal systems management feature. To change the organization, log in as an Administrator of the required organization, and re-enable the feature.
Once the feature has been enabled, any bare metal system connected to the server network will be automatically added to the organization when it is powered on.
The process can take a few minutes, and the system will automatically shut down once it is complete.
After the reboot, the system will appear in the Systems
list.
Click on the name of the system to see basic information, or go to the Properties
, Notes
, and Hardware
tabs for more details.
You can migrate bare metal systems to other organizations if required, using the Migrate
tab.
Provisioning bare metal systems is similar to provisioning other systems, and can be done using the Provisioning
tab.
However, you cannot schedule provisioning; it happens automatically as soon as the system is configured and powered on.
System Set Manager can be used with bare metal systems, although not all features will be available, because bare metal systems do not have an operating system installed. This limitation also applies to mixed sets that contain bare metal systems; all features will be re-enabled if the bare metal systems are removed from the set.
If a bare metal system on the network is not automatically added to the Systems
list, check these things first:
You must have the pxe-default-image
package installed.
File paths and parameters must be configured correctly. Check that the vmlinuz0
and initrd0.img
files, which are provided by pxe-default-image
, are in the locations specified in the rhn.conf
configuration file.
Ensure the networking equipment connecting the bare metal system to the SUSE Manager server is working correctly, and that the SUSE Manager server IP address can be reached from the bare metal system.
The bare metal system to be provisioned must have PXE booting enabled in the boot sequence, and must not be attempting to boot an operating system.
The DHCP server must be responding to DHCP requests during boot. Check the PXE boot messages to ensure that:
the DHCP server is assigning the expected IP address
the DHCP server is assigning the SUSE Manager server IP address as next-server
for booting.
Ensure Cobbler is running, and that the Discovery feature is enabled.
If you see a blue Cobbler menu shortly after booting, discovery has started. If it does not complete successfully, temporarily disable automatic shutdown in order to help diagnose the problem. To disable automatic shutdown:
Select pxe-default-profile
in the Cobbler menu with the arrow keys, and press the Tab key before the timer expires.
Add the kernel boot parameter spacewalk-finally=running
using the integrated editor, and press Enter to continue booting.
Enter a shell with the username root
and password linux
to continue debugging.
Due to a technical limitation, it is not possible to reliably distinguish a new bare metal system from a system that has previously been discovered. Therefore, we recommend that you do not power on bare metal systems multiple times, as this will result in duplicate profiles.
SUSE Manager allows you to autoinstall and manage Xen and KVM VM Guests on a registered VM Host Server. To autoinstall a VM Guest, an autoinstallable distribution and an autoinstallation profile (AutoYaST or Kickstart) need to exist on SUSE Manager. VM Guests registered with SUSE Manager can be managed like “regular” machines. In addition, basic VM Guest management tasks such as (re)starting and stopping or changing processor and memory allocation can be carried out using SUSE Manager.
The following documentation is valid in the context of traditional clients. Salt minions must be treated differently: autoinstallation is not supported for them, and libvirt hosts are supported as read-only.
Autoinstalling and managing VM Guests via SUSE Manager is limited to Xen and KVM guests.
SUSE Manager uses libvirt
for virtual machine management.
Currently, virtual machines from other virtualization solutions such as VMware* or VirtualBox*, are recognized as VM Guests, but cannot be managed from within SUSE Manager.
With SUSE Manager you can automatically deploy Xen and KVM VM Guests using AutoYaST or Kickstart profiles. It is also possible to automatically register the VM Guests, so they can immediately be managed by SUSE Manager.
Setting up and managing VM Guests with SUSE Manager does not require special configuration options. However, you need to provide activation keys for the VM Host Server and the VM Guests, an autoinstallable distribution, and an autoinstallation profile. To automatically register VM Guests with SUSE Manager, a bootstrap script is needed.
Just like any other client, VM Host Server and VM Guests need to be registered with SUSE Manager using activation keys. Find details on how to set up activation keys at Book “Getting Started”, Chapter 5 “Registering Clients”, Section 5.2 “Creating Activation Keys”. While there are no special requirements for a VM Guest key, at least the following requirements must be met for the VM Host Server activation key.
Entitlements: Provisioning, Virtualization Platform.
Packages: rhn-virtualization-host, osad.
If you want to manage the VM Host Server system from SUSE Manager (e.g. by executing remote scripts), the package rhncfg-actions needs to be installed as well.
To autoinstall clients from SUSE Manager , you need to provide an “autoinstallable distribution” , also referred to as autoinstallable tree or installation source. This installation source needs to be made available through the file system of the SUSE Manager host. It can for example be a mounted local or remote directory or a “loop-mounted” ISO image. It must match the following requirements:
Kernel and initrd location:

RedHat: images/pxeboot/vmlinuz and images/pxeboot/initrd.img

SUSE: boot/<arch>/loader/linux and boot/<arch>/loader/initrd
The base channel needs to match the autoinstallable distribution. There is a fundamental difference between RedHat and SUSE systems regarding the package sources for autoinstallation: the packages for a RedHat installation are fetched from the base channel, while packages for installing SUSE systems are fetched from the autoinstallable distribution. As a consequence, the autoinstallable distribution for a SUSE system has to be a complete installation source (the same as for a regular installation).
Make sure an installation source is available from a local directory. The data source can be any kind of network resource, a local directory or an ISO image (which has to be “loop-mounted” ). Files and directories must be world readable.
Log in to the SUSE Manager Web UI and navigate to Systems › Autoinstallation › Distributions. Fill out the form as follows:
Choose a unique name for the distribution. Only letters, numbers, hyphens, periods, and underscores are allowed; the minimum length is 4 characters. This field is mandatory.
Absolute local disk path to installation source. This field is mandatory.
Channel matching the installation source. This channel is the package source for non-SUSE installations. This field is mandatory.
Operating system version matching the installation source. This field is mandatory.
Options passed to the kernel when booting for the installation.
There is no need to specify the install=
parameter since it will automatically be added.
Moreover, the parameters self_update=0 pt.options=self_update
are added automatically to prevent AutoYaST from updating itself during the system installation.
This field is optional.
Options passed to the kernel when booting the installed system for the first time. This field is optional.
Save your settings by clicking the create button. To edit an existing distribution, go to Systems › Autoinstallation › Distributions and click on its label.
Autoinstallation profiles (AutoYaST or Kickstart files) contain all the installation and configuration data needed to install a system without user intervention. They may also contain scripts that will be executed after the installation has completed.
All profiles can be uploaded to SUSE Manager and be edited afterwards. Kickstart profiles can also be created from scratch with SUSE Manager .
A minimalist AutoYaST profile including a script for registering the client with SUSE Manager is listed in Appendix B, Minimalist AutoYaST Profile for Automated Installations and Useful Enhancements. For more information, examples and HOWTOs on AutoYaST profiles, refer to SUSE Linux Enterprise AutoYaST (https://www.suse.com/documentation/sles-12/book_autoyast/data/book_autoyast.html). For more information on Kickstart profiles, refer to your RedHat documentation.
You need the installation media to set up the distribution. Starting with version 15, there is only one installation medium. You will use the same one for SLES, SLED, and all the other SUSE Linux Enterprise 15 based products.
In the AutoYaST profile specify which product is to be installed.
For installing SUSE Linux Enterprise Server use the following snippet in autoyast.xml
:
<products config:type="list">
  <listentry>SLES</listentry>
</products>
Then specify all the required modules as add-on entries in autoyast.xml; a sketch of the corresponding <add-on> section follows the lists below. This is a minimal selection that will result in a working installation that can be managed by SUSE Manager:
SLE-Product-SLES15-Pool
SLE-Manager-Tools15-Pool
SLE-Manager-Tools15-Updates
SLE-Module-Basesystem15-Pool
SLE-Module-Basesystem15-Updates
SLE-Product-SLES15-Updates
It is also recommended to add the following modules:
SLE-Module-Server-Applications15-Pool
SLE-Module-Server-Applications15-Updates
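A minimal sketch of the corresponding <add-on> section in autoyast.xml, assuming the repositories are served from a hypothetical http://example.com/install/SLE-15 tree (add one listentry per repository listed above):

<add-on>
  <add_on_products config:type="list">
    <listentry>
      <media_url>http://example.com/install/SLE-15/SLE-Module-Basesystem15-Pool</media_url>
      <product>sle-module-basesystem</product>
      <product_dir>/</product_dir>
    </listentry>
    <!-- repeat for each Pool and Updates repository listed above -->
  </add_on_products>
</add-on>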
Log in to the SUSE Manager Web interface and open Systems › Autoinstallation › Profiles.
Choose a unique name for the profile. Only letters, numbers, hyphens, periods, and underscores are allowed; the minimum length is 6 characters. This field is mandatory.
Choose an Autoinstallable Tree. If no tree is available, you need to add an Autoinstallable Distribution; refer to Section 11.1.1.2, “Setting up an Autoinstallable Distribution” for instructions.
Choose a Virtualization Type here.
Scroll down to the file upload dialog, select the profile file, and upload it. The uploaded file will be displayed in the contents section, where you can edit it.
Click Create to store the profile.
To edit an existing profile, go to Systems › Autoinstallation › Profiles and click on its label.
If you are changing the Virtualization Type of an existing profile, this may also affect the bootloader and partitioning options; verify these settings afterwards.
Currently it is only possible to create autoinstallation profiles for RHEL systems. If installing a SUSE Linux Enterprise Server system, you need to upload an existing AutoYaST profile as described in Section 11.1.1.4.1, “Uploading an Autoinstallation Profile”.
Log in to the SUSE Manager Web interface and go to Systems › Autoinstallation › Profiles.
Choose a unique name for the profile. The minimum length is 6 characters. This field is mandatory.
Choose a base channel. This field is mandatory.
Choose an Autoinstallable Tree. If no tree is available, you need to add an Autoinstallable Distribution; refer to Section 11.1.1.2, “Setting up an Autoinstallable Distribution” for instructions.
Choose a Virtualization Type here.
Click the Next button.
Select the location of the distribution files for the installation of your VM Guests. There should already be a default location filled out; confirm it by clicking the Next button.
Choose a root password for the VM Guests. Click the Finish button to generate the profile.
This completes Kickstart profile creation. After generating a profile, you are taken to the newly created Kickstart profile. You may browse through the various tabs of the profile and modify the settings as you see fit, but this is not necessary, as the default settings should work well for the majority of cases.
A VM Guest that is autoinstalled does not get automatically registered. Adding a section to the autoinstallation profile that invokes a bootstrap script for registration will fix this. The following procedure describes adding a corresponding section to an AutoYaST profile. Refer to your RedHat Enterprise Linux documentation for instructions on adding scripts to a Kickstart file.
First, provide a bootstrap script on the SUSE Manager server:
Create a bootstrap script for VM Guest s on the SUSE Manager as described in Book “Getting Started”, Chapter 5 “Registering Clients”, Section 5.4 “Registering Traditional Clients”, Section 5.4.1 “Generating a Bootstrap Script”.
Log in as root to the console of the SUSE Manager server and go to /srv/www/htdocs/pub/bootstrap. Copy bootstrap.sh (the bootstrap script created in the previous step) to e.g. bootstrap_vm_guests.sh in the same directory.
Edit the newly created file according to your needs. The minimal requirement is to include the activation key for the VM Guests (see Section 11.1.1.1, “Activation Keys” for details). We strongly recommend also including one or more GPG keys (for example, your organization key and package signing keys).
Log in to the SUSE Manager Web interface and go to Systems › Autoinstallation › Profiles. Click on the profile that is to be used for autoinstalling the VM Guests to open it for editing.
Scroll down and insert the following snippet right before the closing </profile> tag. Replace the given IP address with the address of the SUSE Manager server.
See Appendix B, Minimalist AutoYaST Profile for Automated Installations and Useful Enhancements for an example script.
<scripts>
  <init-scripts config:type="list">
    <script>
      <interpreter>shell</interpreter>
      <location>
        http://192.168.1.1/pub/bootstrap/bootstrap_vm_guests.sh
      </location>
    </script>
  </init-scripts>
</scripts>
Only one <scripts> section is allowed. If your AutoYaST profile already contains a <scripts> section, do not add a second one; instead, place the <script> part above within the existing <scripts> section.
Save the changes.
A VM Host Server system serving as a target for autoinstalling VM Guests from SUSE Manager must be capable of running guest operating systems. This requires either KVM or Xen to be properly set up. For installation instructions for SUSE Linux Enterprise Server systems, refer to the SLES Virtualization Guide available from https://www.suse.com/documentation/sles-12/book_virt/data/book_virt.html. For instructions on setting up a RedHat VM Host Server, refer to your RedHat Enterprise Linux documentation.
Since SUSE Manager uses libvirt for VM Guest installation and management, the libvirtd daemon needs to run on the VM Host Server.
The default libvirt
configuration is sufficient to install and manage VM Guest
s from SUSE Manager
.
However, in case you want to access the VNC console of a VM Guest
as a non-root
user, you need to configure libvirt
appropriately.
Configuration instructions for libvirt on SUSE Linux Enterprise Server are available in the SLES Virtualization Guide, available from https://www.suse.com/documentation/sles-12/book_virt/data/book_virt.html.
For instructions for a RedHat VM Host Server
refer to your RedHat Enterprise Linux documentation.
Apart from being able to serve as a host for KVM
or Xen
guests, which are managed by libvirt
, a VM Host Server
must be registered with SUSE Manager
.
Make sure either KVM or Xen is properly set up.
Make sure the libvirtd daemon is running.
Register the VM Host Server with SUSE Manager :
Create a bootstrap script on the SUSE Manager as described in Book “Getting Started”, Chapter 5 “Registering Clients”, Section 5.4 “Registering Traditional Clients”, Section 5.4.1 “Generating a Bootstrap Script”.
Download the bootstrap script from susemanager.example.com/pub/bootstrap/bootstrap.sh
to the VM Host Server .
Edit the bootstrap script according to your needs. The minimal requirement is to include the activation key for the VM Host Server (see Section 11.1.1.1, “Activation Keys” for details). We strongly recommend also including one or more GPG keys (for example, your organization key and package signing keys).
Execute the bootstrap script to register the VM Host Server .
If the VM Host Server is registered as a Salt minion, a final configuration step is needed in order to gather all the guest VMs defined on the VM Host Server:
Enable the Add-on System Type Virtualization Host on the system's properties page and confirm the change.
Schedule a Hardware Refresh: on the system's Details › Hardware page, click Schedule Hardware Refresh.
When the registration process is finished and all packages have been installed, enable the osad
(Open Source Architecture Daemon).
On SUSE Linux Enterprise Server 12 systems and later, it can be achieved by running the following commands as user root:
systemctl stop rhnsd
systemctl disable rhnsd

systemctl enable osad
systemctl start osad
osad works together with rhnsd. The rhnsd daemon checks for scheduled actions every four hours, so it can take up to four hours before a scheduled action is carried out.
If many clients are registered with SUSE Manager, this long interval ensures a certain level of load balancing since not all clients act on a scheduled action at the same time.
For setting the time interval, see the systems management section of this guide.
However, when managing VM Guests, you usually want actions like rebooting a VM Guest to be carried out immediately, which can be done by using osad.
The osad
daemon receives commands over the jabber protocol from SUSE Manager and commands are instantly executed.
Alternatively you may schedule actions to be carried out at a fixed time in the future (whereas with rhnsd
you can only schedule for a time in the future plus up to four hours).
When all requirements on the SUSE Manager server and the VM Host Server are met, you can start to autoinstall VM Guests on the host. Note that VM Guests will not be automatically registered with SUSE Manager; therefore, we strongly recommend modifying the autoinstallation profile as described in Section 11.1.1.4.3, “Adding a Registration Script to the Autoinstallation Profile”. VM Guests need to be registered to manage them with SUSE Manager. Proceed as follows to autoinstall a VM Guest:
It is not possible to install more than one VM Guest at a time on a single VM Host Server. When scheduling more than one autoinstallation with SUSE Manager, make sure to choose a timing that starts the next installation after the previous one has finished. If a guest installation starts while another one is still running, the running installation will be cancelled.
Log in to the SUSE Manager Web interface and click the Systems tab.
Click the VM Host Server's name to open its details page.
Open the form for creating a new VM Guest. Fill out the form by choosing an autoinstallation profile and by specifying a name for the VM Guest (it must not already exist on the VM Host Server). Choose a proxy if applicable and enter a schedule. To change the VM Guest's hardware profile and configuration options, open the advanced options.
Finish the configuration by submitting the form. A status page opens for you to monitor the autoinstallation process.
To view the installation log, open the system's event history, where you can click an entry to view a detailed log.
In case an installation has failed, you can reschedule it from this page once you have corrected the problem. You do not have to configure the installation again.
If the event log does not contain enough information to locate a problem, log in to the VM Host Server console and read the log file /var/log/up2date.
If you are using rhnsd, you may alternatively trigger any scheduled actions immediately by calling rhn_check on the VM Host Server. Increase the command’s verbosity by using the options -v, -vv, or -vvv, respectively.
Basic VM Guest management actions such as restarting or shutting down a virtual machine as well as changing the CPU and memory allocation can be carried out in the SUSE Manager Web interface if the following requirements are met:
VM Host Server must be a KVM or Xen host.
libvirtd must be running on the VM Host Server.
VM Host Server must be registered with SUSE Manager.
In addition, if you want to see the profile of the VM Guest, install packages, etc., you must also register it with SUSE Manager.
All actions can be triggered in the SUSE Manager Web UI from the Systems tab. On the resulting page, click the VM Host Server's name, then open its virtualization page. This page lists all VM Guests for this host that are known to SUSE Manager.
Click the name of a VM Guest on the VM Host Server's virtualization page to open its profile page with detailed information about this guest. For details, refer to Book “Reference Manual”, Chapter 7 “Systems”.
A profile page for a virtual system does not differ from a regular system’s profile page. You can perform the same actions (e.g. installing software or changing its configuration).
To start, stop, restart, suspend, or resume a VM Guest, navigate to the VM Host Server's virtualization page, select the VM Guests listed in the table, and scroll down to the bottom of the page. Choose an action from the drop-down list, apply it, and confirm the action on the next page.
Automatically restarting a VM Guest when the VM Host Server reboots is not enabled by default on VM Guests and cannot be configured from SUSE Manager. Refer to your KVM or Xen documentation. Alternatively, you may use libvirt to enable automatic reboots.
To change the CPU or RAM allocation of a VM Guest, navigate to the VM Host Server's virtualization page, select the VM Guest from the table, and scroll down to the bottom of the page. Choose the appropriate action from the drop-down list and apply it.
The memory allocation can be changed on the fly, provided the memory ballooning driver is installed on the VM Guest. If this is not the case, or if you want to change the CPU allocation, you need to shut down the guest first. Refer to Section 11.2.2, “Starting, Stopping, Suspending and Resuming a VM Guest” for details.
To delete a VM Guest, you must first shut it down as described in Section 11.2.2, “Starting, Stopping, Suspending and Resuming a VM Guest”. Wait at least two minutes to allow the shutdown to finish, then choose the delete action and confirm it.
Foreign virtual hosts (such as vCenter and vSphere ESXi) can be inventoried using the Virtual Host Manager.
From the vSphere Client you can define roles and permissions for vCenter and vSphere ESXi users, allowing vSphere objects and resources to be imported and inventoried by SUSE Manager.
Objects and resources are then displayed as foreign hosts on the SUSE Manager Systems › Virtual Systems page.
The following sections will guide you through:
Requirements
Overview of permissions and roles
Adding vCenter and vSphere ESXi hosts to SUSE Manager
This table displays the default API communication port and required access rights for inventorying objects and resources:
Ports / Permissions | Description |
---|---|
443 | Default port that SUSE Manager uses to access the ESXi API for obtaining infrastructure data |
read-only | All vCenter/ESXi objects and resources that should be inventoried by the Virtual Host Manager should be at least assigned the read-only role. Mark objects and resources with no-access to exclude them from the inventory. |
This section will guide you through assigning user permissions and roles in vCenter/ESXi.
A user is someone who has been authorized to access an ESXi host. The Virtual Host Manager (located on the SUSE Manager server) will inventory ESXi data defined by assigned roles and permissions on a user account.
For example: The user John has been assigned the read-only access role to all servers and datacenters in his company with one exception. John’s account has been assigned the no-access role on the company’s Financial Database server. You decide to use John’s user account and add the ESXi host to SUSE Manager. During the inventory the Financial Database server will be excluded.
Keep user access roles in mind when planning to add ESXi hosts to SUSE Manager. Note that SUSE Manager will not inventory any objects or resources assigned the no-access role on any user account.
When planning to add new ESXi hosts to SUSE Manager, consider whether the roles and permissions assigned to the user you will use cover everything you need to be inventoried by SUSE Manager.
See the official vSphere documentation on adding new users and assigning roles.
This procedure guides you through inventorying a vSphere ESXi host with SUSE Manager.
From the SUSE Manager Web UI, select Systems › Virtual Host Managers from the left navigation bar.
From the upper right corner of the Virtual Host Managers page, select VMware-based.
From the Add a VMware-based Virtual Host Manager page, complete these fields with your ESXi host data:
A custom name for your Virtual Host Manager.
The fully-qualified domain name (FQDN) or host IP address.
The default ESXi API port.
The username of the ESXi user account. Remember that only objects and resources which match this user's defined role will be inventoried; set the user's role on objects and resources you want inventoried to read-only.
The ESXi user's password.
Click the Add button.
From the Virtual Host Managers page, select the new Virtual Host Manager. From its page, click the Refresh Data button.
If you do not refresh the data from a new Virtual Host Manager, host data will not be inventoried and therefore will not be displayed under Systems › Virtual Systems.
View inventoried ESXi host objects and resources by selecting Systems › Virtual Systems.
In addition to the SUSE Manager Web interface, SUSE Manager offers two command line tools for managing (traditional) system configuration files:
The Configuration Client (mgrcfg-client)
The Configuration Manager (mgrcfg-manager)
You can use the mgrcfg-actions tool to enable and disable configuration management on client systems.
To work with these tools install them from the SUSE Manager Tools child channel as root.
zypper in rhncfg-manager
zypper in rhncfg-actions
Install the following package on client systems:
zypper in rhncfg-client
When a configuration file is deployed via SUSE Manager, a backup of the previous file, including its full path, is stored in the /var/lib/rhncfg/backups/ directory. The backup retains its filename but has a .rhn-cfg-backup extension appended.
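For example, a previously deployed file might be backed up as follows (a hypothetical path for illustration):

/var/lib/rhncfg/backups/etc/example-config.txt.rhn-cfg-backup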
The Actions Control (mgr-actions-control) application is used to enable and disable configuration management on a system. Client systems cannot be managed in this fashion by default. This tool allows SUSE Manager administrators to enable or disable specific modes of allowable actions such as:
Deploying a configuration file on the system
Uploading a file from the system
Using the diff command to compare what is currently managed on a system with what is available
Running remote commands
These various modes are enabled/disabled by placing/removing files and directories in the /etc/sysconfig/rhn/allowed-actions/
directory.
Due to the default permissions of the /etc/sysconfig/rhn/
directory, Actions Control has to be run by someone with root access.
There is a manpage available, as for most command line tools. First, decide which scheduled actions should be enabled for use by system administrators. The following options enable the various scheduled action modes:
--enable-deploy
Allow mgrcfg-client to deploy files.

--enable-diff
Allow mgrcfg-client to diff files.

--enable-upload
Allow mgrcfg-client to upload files.

--enable-mtime-upload
Allow mgrcfg-client to upload mtime (file modification time).

--enable-all
Allow mgrcfg-client to do everything.

--enable-run
Enable running scripts.

--disable-deploy
Disable deployment.

--disable-diff
Prohibit diff use.

--disable-upload
No file uploads allowed.

--disable-mtime-upload
Disable mtime upload.

--disable-all
Disable all options.

--disable-run
No scripts allowed to run.

--report
Report whether modes are enabled or disabled.

-f, --force
Force the operation without asking first.

-h, --help
Show help message and exit.
Once a mode is set, your system is ready for configuration management through SUSE Manager.
A common option is mgr-actions-control --enable-all
.
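For example, enabling all modes and then verifying the result (assuming the option names listed above):

mgr-actions-control --enable-all
mgr-actions-control --report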
The Configuration Client (mgrcfg-client) is installed on and run from an individual client system to gain knowledge about how SUSE Manager deploys configuration files to the client.
The Configuration Client offers these primary modes:
list
get
channels
diff
verify
To list the configuration files for the machine and the labels of the config channels containing them, issue the command:
mgrcfg-client list
The output resembles the following list ("DoFoS" is a shortcut for "D or F or S", meaning "Directory", "File", or "Something else"):

DoFoS   Config Channel      File
F       config-channel-17   /etc/example-config.txt
F       config-channel-17   /var/spool/aalib.rpm
F       config-channel-14   /etc/rhn/rhn.conf
These configuration files apply to your system. However, there may be duplicate files present in other channels. For example, issue the following command:
mgrcfg-manager list config-channel-14
and observe the following output:
Files in config channel 'config-channel-14'
/etc/example-config.txt
/etc/rhn/rhn.conf
You may wonder why the second version of /etc/example-config.txt
in config-channel-14 does not apply to the client system.
The rank of the /etc/example-config.txt
file in config-channel-17 was higher than that of the same file in config-channel-14.
As a result, the version of the configuration file in config-channel-14 is not deployed for this system, and therefore the mgrcfg-client command does not list the file.
To download the most relevant configuration file for the machine, issue the command:
mgrcfg-client get /etc/example-config.txt
You should see output resembling:
Deploying /etc/example-config.txt
View the contents of the file with less or another pager. Note that the file is selected as the most relevant based on the rank of the config channel containing it. The rank is assigned within the Configuration tab of the System Details page.
Refer to Section "System Details" (Chapter 4, Systems, User Guide) for instructions.
To view the labels and names of the config channels that apply to the system, issue the command:
mgrcfg-client channels
You should see output resembling:
Config channels:
Label               Name
-----               ----
config-channel-17   config chan 2
config-channel-14   config chan 1
The list of options available for mgrcfg-client get:

--topdir=TOPDIR
Make all file operations relative to this string.

--exclude=EXCLUDE
Exclude a file from being deployed with get. May be used multiple times.

-h, --help
Show help message and exit.
To view the differences between the config files deployed on the system and those stored by SUSE Manager, issue the command:
mgrcfg-client diff
The output resembles the following:
# rhncfg-client diff
--- /etc/test
+++ /etc/test 2013-08-28 00:14:49.405152824 +1000
@@ -1 +1,2 @@
 This is the first line
+This is the second line added
In addition, you can include the --topdir
option to compare config files with those located in an arbitrary (and unused) location on the client system, like this:
# mgrcfg-client diff --topdir /home/test/blah/
/usr/bin/diff: /home/test/blah/etc/example-config.txt: No such file or directory
/usr/bin/diff: /home/test/blah/var/spool/aalib.rpm: No such file or directory
To quickly determine if client configuration files are different from those associated with it via SUSE Manager, issue the command:
mgrcfg-client verify
The output resembles the following:
modified /etc/example-config.txt
/var/spool/aalib.rpm
The file example-config.txt
is locally modified, while aalib.rpm
is not.
The list of the options available for mgrcfg-client verify:

-v, --verbose
Increase the amount of output detail. Displays differences in the mode, owner, and group permissions for the specified config file.

-o, --only
Only show differing files.

-h, --help
Show help message and exit.
The Configuration Manager (mgrcfg-manager) is designed to maintain SUSE Manager’s central repository of config files and channels, not those located on client systems. This tool offers a command line alternative to the configuration management features in the SUSE Manager Web interface. Additionally, some or all of the related maintenance tasks can be scripted.
To use the command line interface, configuration administrators require a SUSE Manager account (username and password) with the appropriate permission set.
The username may be specified in /etc/sysconfig/rhn/rhncfg-manager.conf
or in the [rhncfg-manager]
section of ~/.rhncfgrc
.
When the Configuration Manager is run as root, it attempts to pull in needed configuration values from the Red Hat Update Agent.
When run as a user other than root, you may have to change the ~/.rhncfgrc
configuration file.
The session file is cached in ~/.rhncfg-manager-session
to avoid having to log in for every command.
The default timeout for the Configuration Manager is 30 minutes.
To adjust this, add the server.session_lifetime
option and a new value to the /etc/rhn/rhn.conf
file on the server running the manager.
For example, to set the timeout to 120 minutes:
server.session_lifetime = 120
The Configuration Manager offers the following primary modes:
add
create-channel
diff
diff-revisions
download-channel
get
list
list-channels
remove
remove-channel
revisions
update
upload-channel
Each mode offers its own set of options, which can be displayed by issuing the following command:
mgrcfg-manager mode --help
Replace mode with the name of the mode whose options you want to see:
mgrcfg-manager diff-revisions --help
To create a config channel for your organization, issue the command:
mgrcfg-manager create-channel channel-label
If prompted for your SUSE Manager username and password, provide them. Once you have created a config channel, use the remaining modes listed above to populate and maintain that channel.
To add a file to a config channel, specify the channel label and the local file to be uploaded:
mgrcfg-manager add --channel=channel-label /path/to/file
In addition to the required channel label and the path to the file, you can use the available options for modifying the file during its addition.
For instance, you can alter the path and file name by including the --dest-file
option in the command:
mgrcfg-manager add --channel=channel-label \ --dest-file=/new/path/to/file.txt/path/to/file
The output resembles the following:
Pushing to channel example-channel
Local file /path/to/file -> remote file /new/path/to/file.txt
The list of options available for mgrcfg-manager add:

--channel=CHANNEL
Upload files into this config channel.

--dest-file=DEST_FILE
Upload the file as this path.

--delim-start=DELIM_START
Start delimiter for variable interpolation.

--delim-end=DELIM_END
End delimiter for variable interpolation.

--ignore-missing
Ignore missing local files.

-h, --help
Show help message and exit.
By default, the maximum file size for configuration files is 128 KB.
If you need to change that value, find or create the following line in the /etc/rhn/rhn.conf
file:
web.maximum_config_file_size=128
Change the value from 128 to whatever limit you need in kilobytes.
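For example, to raise the limit to 512 KB (an arbitrary value chosen for illustration):

web.maximum_config_file_size=512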
To view the differences between the config files on disk and the latest revisions in a channel, issue the command:
mgrcfg-manager diff --channel=channel-label --dest-file=/path/to/file.txt \ /local/path/to/file
You should see output resembling:
--- /tmp/dest_path/example-config.txt config_channel: example-channel revision: 1
+++ /home/test/blah/hello_world.txt 2003-12-14 19:08:59.000000000 -0500
@@ -1 +1 @@
-foo
+hello, world
The list of options available for mgrcfg-manager diff:

--channel=CHANNEL
Get file(s) from this config channel.

--revision=REVISION
Use this revision.

--dest-file=DEST_FILE
Upload the file at this path.

--topdir=TOPDIR
Make all files relative to this string.

-h, --help
Show help message and exit.
To compare different versions of a file across channels and revisions, use the -r flag to indicate which revision of the file should be compared and the -n flag to identify the two channels to be checked. Specify only one file name here since you are comparing the file against another version of itself. For example:
mgrcfg-manager diff-revisions -n=channel-label1 -r=1 \ -n=channel-label2 -r=1 \ /path/to/file.txt
The output resembles the following:
--- /tmp/dest_path/example-config.txt 2004-01-13 14:36:41 \
config channel: example-channel2 revision: 1
--- /tmp/dest_path/example-config.txt 2004-01-13 14:42:42 \
config channel: example-channel3 revision: 1
@@ -1 +1,20 @@
-foo
+blah
+-----BEGIN PGP SIGNATURE-----
+Version: GnuPG v1.0.6 (GNU/Linux)
+Comment: For info see http://www.gnupg.org
+
+iD8DBQA9ZY6vse4XmfJPGwgRAsHcAJ9ud9dabUcdscdcqB8AZP7e0Fua0NmKsdhQCeOWHX
+VsDTfen2NWdwwPaTM+S+Cow=
+=Ltp2
+-----END PGP SIGNATURE-----
The list of options available for mgrcfg-manager diff-revisions:

-n, --channel=CHANNEL
Use this config channel.

-r, --revision=REVISION
Use this revision.

-h, --help
Show help message and exit.
To download all the files in a channel to disk, create a directory and issue the following command:
mgrcfg-manager download-channel channel-label --topdir .
The output resembles the following:
Copying /tmp/dest_path/example-config.txt -> \ blah2/tmp/dest_path/example-config.txt
The list of options available for mgrcfg-manager download-channel:

--topdir=TOPDIR
Directory to which all the file paths are relative. This option must be set.

-h, --help
Show help message and exit.
To direct the contents of a particular file to stdout, issue the command:
mgrcfg-manager get --channel=channel-label \ /tmp/dest_path/example-config.txt
You should see the contents of the file as the output.
To list all the files in a channel, issue the command:
mgrcfg-manager list channel-label
You should see output resembling:
Files in config channel 'example-channel3'
/tmp/dest_path/example-config.txt
The list of the options available for mgrcfg-manager get:

--channel=CHANNEL
Get file(s) from this config channel.

--topdir=TOPDIR
Directory to which all files are relative.

--revision=REVISION
Get this file revision.

-h, --help
Show help message and exit.
To list all of your organization’s configuration channels, issue the command:
mgrcfg-manager list-channels
The output resembles the following:
Available config channels:
example-channel
example-channel2
example-channel3
config-channel-14
config-channel-17
This does not list local_override or server_import channels.
To remove a file from a channel, issue the command:
mgrcfg-manager remove --channel=channel-label /tmp/dest_path/example-config.txt
If prompted for your SUSE Manager username and password, provide them.
The list of the options available for mgrcfg-manager remove:

--channel=CHANNEL
Remove files from this config channel.

--topdir=TOPDIR
Directory to which all files are relative.

-h, --help
Show help message and exit.
To remove a config channel in your organization, issue the command:
mgrcfg-manager remove-channel channel-label
The output resembles the following:
Removing config channel example-channel
Config channel example-channel removed
To find out how many revisions (from 1 to N where N is an integer greater than 0) of a file/path are in a channel, issue the following command:
mgrcfg-manager revisions channel-label /tmp/dest_path/example-config.txt
The output resembles the following:
Analyzing files in config channel example-channel \ /tmp/dest_path/example-config.txt: 1
To create a new revision of a file in a channel (or to add the first revision to that channel if none existed before for the given path), issue the following command:
mgrcfg-manager update --channel=channel-label \ --dest-file=/path/to/file.txt /local/path/to/file
The output resembles the following:
Pushing to channel example-channel: Local file example-channel /tmp/local/example-config.txt -> \ remote file /tmp/dest_path/example-config.txt
The list of the options available for mgrcfg-manager update:

--channel=CHANNEL
Upload files into this config channel.

--dest-file=DEST_FILE
Upload the file to this path.

--topdir=TOPDIR
Directory to which all files are relative.

--delim-start=DELIM_START
Start delimiter for variable interpolation.

--delim-end=DELIM_END
End delimiter for variable interpolation.

-h, --help
Show help message and exit.
To upload multiple files to a config channel from a local disk at once, issue the command:
mgrcfg-manager upload-channel --topdir=topdir channel-label
The output resembles the following:
Using config channel example-channel4
Uploading /tmp/ola_world.txt from blah4/tmp/ola_world.txt
The list of the options available for mgrcfg-manager upload-channel:

--topdir=TOPDIR
Directory all the file paths are relative to.

--channel=CHANNEL
Comma-delimited list of channels into which the config info will be uploaded. Example: --channel=foo,bar,baz.

-h, --help
Show help message and exit.
mgr-sync
should be used if SUSE Manager is connected to SUSE Customer Center (SCC).
With mgr-sync
you may add or synchronize products and channels.
The mgr-sync
command also enables and refreshes SCC data.
By default, log rotation for spacewalk-repo-sync is disabled. The synchronization is run by a cron job, and its logging is managed by a configuration file. You can enable log rotation by editing the configuration file at /etc/sysconfig/rhn/reposync and setting the MAX_DAYS parameter. This permits the deletion of log files that are older than the specified period of time.
mgr-sync
now requires the username/password of a SUSE Manager administrator.
Most functions are available as part of the public API.
mgr-sync provides a command structure with sub-commands similar to git or osc. For a complete list of command line option, see the mgr-sync manpage (man mgr-sync). Basic actions are:
mgr-sync list channel(s)|product(s)|credentials
mgr-sync add channel(s)|product(s)|credentials
mgr-sync delete credentials
mgr-sync refresh [--refresh-channels] [--from-mirror MIRROR]
See the following examples.
mgr-sync list channels
mgr-sync add channel LABEL
mgr-sync list products
mgr-sync add product
mgr-sync refresh
mgr-sync refresh --refresh-channels
mgr-sync list credentials
mgr-sync add credentials
There can be only one primary credential. This is the username/password pair used first when retrieving the list of available channels and packages.
mgr-sync add credentials --primary
mgr-sync delete credentials
SUSE Manager provides the smdba command for managing the installed database.
It is the successor of db-control
, which is now unsupported.
The smdba command works on local databases only, not remote ones. This utility allows you to perform several administrative tasks, such as backing up and restoring the database. You can create, verify, and restore backups, obtain the database status, and restart the database if necessary. The smdba command supports PostgreSQL.
Find basic information about smdba in the smdba manpage.
If you have stopped or restarted the database, Spacewalk services can lose their connections. In such a case, run the following command:
spacewalk-service restart
Depending on the database installed, smdba provides several subcommands:
backup-hot       Enable continuous archiving backup.
backup-restore   Restore the SUSE Manager Database from backup.
backup-status    Show backup status.
db-start         Start the SUSE Manager Database.
db-status        Show database status.
db-stop          Stop the SUSE Manager Database.
space-overview   Show database space report.
space-reclaim    Free disk space from unused objects in tables and indexes.
space-tables     Show space report for each table.
system-check     Common backend health check.
For a list of available commands on your particular appliance, call smdba help.
To display the help message for a specific subcommand, call smdba COMMAND help
.
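For example, to display the help message for the space-reclaim subcommand:

smdba space-reclaim help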
There are three commands to start, stop, or get the status of the database. Use the following commands:
# smdba db-status
Checking database core...       online
# smdba db-stop
Stopping the SUSE Manager database...
Stopping listener:     done
Stopping core:         done
# smdba db-status
Checking database core...       offline
# smdba db-start
Starting listener:     done
Starting core...       done
The mgr-create-bootstrap-repo
command is used on the SUSE Manager Server to create a new bootstrap repository.
Use the -l
option to list all available repositories:
# mgr-create-bootstrap-repo -l
You can then invoke the command with the appropriate repository name to create the bootstrap repository you require, for example:
# mgr-create-bootstrap-repo SLE-version-x86_64
Creating a Bootstrap Repository with Custom Channels
Custom channels are channels that have been created to manage any custom packages that an organization might require. To create a new bootstrap repository from a custom channel, use the mgr-create-bootstrap-repo command with the --with-custom-channels option:
# mgr-create-bootstrap-repo --with-custom-channels
If you create a bootstrap repository that contains custom channels, and later attempt to rebuild with the mgr-create-bootstrap-repo
command, the custom channel information will remain in the bootstrap repository.
If you want to remove custom channel information from your bootstrap repository, you will need to use the flush
option when you rebuild:
# mgr-create-bootstrap-repo --flush
The following section will help you become more familiar with the spacecmd
command-line interface.
This interface is available for SUSE Manager, Satellite and Spacewalk servers.
spacecmd is written in Python and uses the XML-RPC API provided by the server.
Manage almost all aspects of SUSE Manager from the command line with spacecmd
Tab completion is available for all commands
Single commands can be passed to spacecmd without entering the interactive shell (excellent for shell scripts)
May also be accessed and used as an interactive shell
Advanced search methods are available for finding specific systems, thus removing the need to create system groups (nevertheless groups are still recommended)
Complete functionality through the Spacewalk API. Almost all commands that can be executed from the Web UI can be performed via the spacecmd command line
The following section provides configuration tips for spacecmd.
Normally spacecmd prompts you for a username and password each time you attempt to login to the interactive shell. Alternatively you can configure spacecmd with a credentials file to avoid this requirement.
Create a hidden spacecmd directory in your home directory and set permissions:
mkdir ~/.spacecmd chmod 700 ~/.spacecmd
Create a config
file in ~/.spacecmd/
and provide proper permissions:
touch ~/.spacecmd/config
chmod 600 ~/.spacecmd/config
Edit the config
file and add the following configuration lines. (You can use either localhost or the FQDN of your SUSE Manager server):
[spacecmd]
server=FQDN-here
username=username-here
password=password-here
Check connectivity by entering spacecmd
as root:
# spacecmd
By default spacecmd prints server status messages during connection attempts.
These messages can cause a lot of clutter when parsing system lists.
The following alias will force spacecmd to use quiet mode thus preventing this behavior.
Add the following line to your ~/.bashrc
file:
alias spacecmd='spacecmd -q'
spacecmd help can be accessed by typing spacecmd -h or spacecmd --help:
Usage: spacecmd [options] [command]

Options:
-u USERNAME, --username=USERNAME  use this username to connect to the server
-p PASSWORD, --password=PASSWORD  use this password to connect to the server
-s SERVER, --server=SERVER  connect to this server [default: local hostname]
--nossl  use HTTP instead of HTTPS
--nohistory  do not store command history
-y, --yes  answer yes for all questions
-q, --quiet  print only error messages
-d, --debug  print debug messages (can be passed multiple times)
-h, --help  show this help message and exit
As root you can access available functions without entering the spacecmd shell:
# spacecmd -- help

Documented commands (type help <topic>):
========================================
activationkey_addchildchannels           org_trustdetails
activationkey_addconfigchannels          package_details
activationkey_addentitlements            package_listdependencies
activationkey_addgroups                  package_listerrata
activationkey_addpackages                package_listinstalledsystems
activationkey_clone                      package_listorphans
activationkey_create                     package_remove
activationkey_delete                     package_removeorphans
activationkey_details                    package_search
activationkey_diff                       repo_addfilters
activationkey_disable                    repo_clearfilters
activationkey_disableconfigdeployment    repo_create
...
This section provides troubleshooting solutions for working with spacecmd.
The support article associated with the following issue is located at: https://www.suse.com/support/kb/doc/?id=7018627
When creating a distribution with spacecmd, it will automatically set localhost as the server name instead of the FQDN of SUSE Manager. This will result in the following kernel option being written:
install=http://localhost/ks/dist/<distributionname>
Set the FQDN in $HOME/.spacecmd/config
like the following:
test:~/.spacecmd # cat config
[spacecmd]
server=test.mytest.env
username=admin
password=password
nossl=0
This problem may be experienced if $HOME/.spacecmd/config
has been created and the server name option was set to localhost.
When running spacecmd
non-interactively, you must escape arguments passed to the command.
Always put --
before arguments to avoid them being treated as global arguments.
Additionally, make sure you escape any quotes that you pass to the functions so that they are not interpreted.
An example of a well-formed spacecmd
command:
spacecmd -s server1 -- softwarechannel_create -n \'My Channel\' -l channel1 -a x86_64
The spacecmd
command keeps a cache of the various systems and packages that you have installed.
Sometimes, this can result in a mismatch between the system name and the system ID.
To clear the spacecmd
cache, use this command:
spacecmd clear_caches
The following sections provide descriptions for all documented spacecmd commands. Each command is grouped by the function prefix. Keep in mind that all commands may also be called using scripts and passed to spacecmd as stand-alone commands.
The following spacecmd commands are available for use with activation keys.
Add child channels to an activation key.
usage: activationkey_addchildchannels KEY <CHANNEL ...>
Add configuration channels to an activation key.
usage: activationkey_addconfigchannels KEY <CHANNEL ...> [options] options: -t add channels to the top of the list -b add channels to the bottom of the list
Add available entitlements to an activation key.
In the WebUI entitlements are known as System Types. Nevertheless the spacecmd backend still utilizes the entitlements term. Therefore any scripts you may be using can remain unchanged.
usage: activationkey_addentitlements KEY <ENTITLEMENT ...>
Add existing groups to an activation key.
usage: activationkey_addgroups KEY <GROUP ...>
Add packages to an activation key.
usage: activationkey_addpackages KEY <PACKAGE ...>
Clone an existing activation key.
usage examples:
activationkey_clone foo_key -c bar_key
activationkey_clone foo_key1 foo_key2 -c prefix
activationkey_clone foo_key -x "s/foo/bar"
activationkey_clone foo_key1 foo_key2 -x "s/foo/bar"

options:
-c CLONE_NAME : Name of the resulting key, treated as a prefix for multiple keys
-x "s/foo/bar" : Optional regex replacement, replaces foo with bar in the clone description, base-channel label, child-channel labels, config-channel names
Create a new activation key.
usage: activationkey_create [options]

options:
-n NAME
-d DESCRIPTION
-b BASE_CHANNEL
-u set key as universal default
-e [enterprise_entitled,virtualization_host]
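For example, a minimal sketch of creating a key; the key name, description, and base channel label are placeholders, not values taken from your installation:
spacecmd {SSM:0}> activationkey_create -n example-key -d "Example key" -b example-base-channel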
Delete an existing activation key.
usage: activationkey_delete KEY
Show details of an existing activation key.
usage: activationkey_details KEY ...
Check the difference between two activation keys.
usage: activationkey_diff SOURCE_ACTIVATIONKEY TARGET_ACTIVATIONKEY
Disable an existing activation key.
usage: activationkey_disable KEY [KEY ...]
Disable configuration channel deployment for an existing activation key.
usage: activationkey_disableconfigdeployment KEY
Enable an existing activation key.
usage: activationkey_enable KEY [KEY ...]
Enable configuration channel deployment for an existing activation key.
usage: activationkey_enableconfigdeployment KEY
Export activation key(s) to a JSON formatted file.
usage: activationkey_export [options] [<KEY> ...]

options:
-f outfile.json : specify an output filename, defaults to <KEY>.json if exporting a single key, akeys.json for multiple keys, or akey_all.json if no KEY specified (export ALL)

Note: KEY list is optional, default is to export ALL keys
Import activation key(s) from JSON file(s).
usage: activationkey_import <JSONFILE ...>
List all existing activation keys.
usage: activationkey_list
List the base channel associated with an activation key.
usage: activationkey_listbasechannel KEY
List child channels associated with an activation key.
usage: activationkey_listchildchannels KEY
List configuration channels associated with an activation key.
usage: activationkey_listconfigchannels KEY
List entitlements associated with an activation key.
usage: activationkey_listentitlements KEY
List groups associated with an activation key.
usage: activationkey_listgroups KEY
List packages associated with an activation key.
usage: activationkey_listpackages KEY
List systems registered with an activation key.
usage: activationkey_listsystems KEY
Remove child channels from an activation key.
usage: activationkey_removechildchannels KEY <CHANNEL ...>
Remove configuration channels from an activation key.
usage: activationkey_removeconfigchannels KEY <CHANNEL ...>
Remove entitlements from an activation key.
usage: activationkey_removeentitlements KEY <ENTITLEMENT ...>
Remove groups from an activation key.
usage: activationkey_removegroups KEY <GROUP ...>
Remove packages from an activation key.
usage: activationkey_removepackages KEY <PACKAGE ...>
Set the base channel for an activation key.
usage: activationkey_setbasechannel KEY CHANNEL
Set the ranked order of configuration channels.
usage: activationkey_setconfigchannelorder KEY
Set the contact method to use for systems registered with a specific key. (Use the XML-RPC API to access the latest contact methods.) The following contact methods are available for use with traditional spacecmd: ['default', 'ssh-push', 'ssh-push-tunnel']
usage: activationkey_setcontactmethod KEY CONTACT_METHOD
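For example, to switch a key to the SSH push contact method (the key name is a placeholder):
spacecmd {SSM:0}> activationkey_setcontactmethod 1-example-key ssh-push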
Add a description for an activation key.
usage: activationkey_setdescription KEY DESCRIPTION
Set a specific key as the universal default.
usage: activationkey_setuniversaldefault KEY
Using a universal default key is not recommended as a best practice.
Set the usage limit of an activation key; the limit can be a number or "unlimited".
usage: activationkey_setusagelimit KEY <usage limit>
usage: activationkey_setusagelimit KEY unlimited
Usage limits are only applicable to traditionally managed systems. Currently usage limits do not apply to Salt or foreign managed systems.
The following API command and its options are available for calling the XML-RPC API directly. Calling the API directly allows you to use the latest features in SUSE Manager from the command line, using spacecmd as a wrapper for stand-alone commands or from within scripts.
spacecmd is the traditional tool for Spacewalk.
It works out of the box with SUSE Manager, but be aware that the latest features (for example, Salt) are often excluded from the traditional spacecmd command-line tool.
To gain access to the latest feature additions, call api api.getApiCallList from within spacecmd to list all currently available API commands, formatted in JSON.
You can then call these commands directly.
Call XML-RPC API with arguments directly.
usage: api [options] API_STRING

options:
-A, --args  Arguments for the API other than session id in comma separated strings or JSON expression
-F, --format  Output format
-o, --output  Output file

examples:
api api.getApiCallList
api --args "sysgroup_A" systemgroup.listSystems
api -A "rhel-i386-server-5,2011-04-01,2011-05-01" -F "%(name)s" channel.software.listAllPackages
Clears the terminal screen.
The following spacecmd commands are available for use with configuration channels.
Creates a configuration file.
usage: configchannel_addfile [CHANNEL] [options]

options:
-c CHANNEL
-p PATH
-r REVISION
-o OWNER [default: root]
-g GROUP [default: root]
-m MODE [default: 0644]
-x SELINUX_CONTEXT
-d path is a directory
-s path is a symlink
-b path is a binary (or other file which needs base64 encoding)
-t SYMLINK_TARGET
-f local path to file contents

Note re binary/base64: Some text files, notably those containing trailing newlines or ASCII escape characters (or other characters not allowed in XML), need to be sent as binary (-b). Some effort is made to auto-detect files which require this, but you may need to specify it explicitly.
Backup a configuration channel.
usage: configchannel_backup CHANNEL [OUTDIR]

OUTDIR defaults to $HOME/spacecmd-backup/configchannel/YYYY-MM-DD/CHANNEL
Clone configuration channel(s).
usage examples:
configchannel_clone foo_label -c bar_label
configchannel_clone foo_label1 foo_label2 -c prefix
configchannel_clone foo_label -x "s/foo/bar"
configchannel_clone foo_label1 foo_label2 -x "s/foo/bar"

options:
-c CLONE_LABEL : name/label of the resulting cc (note does not update description, see -x option), treated as a prefix if multiple keys are passed
-x "s/foo/bar" : Optional regex replacement, replaces foo with bar in the clone name, label and description

Note: If no -c or -x option is specified, interactive mode is assumed
Create a configuration channel.
usage: configchannel_create [options] options: -n NAME -l LABEL -d DESCRIPTION
Delete a configuration channel.
usage: configchannel_delete CHANNEL ...
Show the details of a configuration channel.
usage: configchannel_details CHANNEL ...
Find differences between configuration channels.
usage: configchannel_diff SOURCE_CHANNEL TARGET_CHANNEL
Export configuration channel(s) to a json formatted file.
usage: configchannel_export <CHANNEL>... [options]

options:
-f outfile.json : specify an output filename, defaults to <CHANNEL>.json if exporting a single channel, ccs.json for multiple channels, or cc_all.json if no CHANNEL specified (export ALL)

Note: CHANNEL list is optional, default is to export ALL
Show the details of a file in a configuration channel.
usage: configchannel_filedetails CHANNEL FILE [REVISION]
Forces a redeployment of files within a channel on all subscribed systems.
usage: configchannel_forcedeploy CHANNEL
Import configuration channel(s) from a json file.
usage: configchannel_import <JSONFILES...>
List all configuration channels.
usage: configchannel_list
List all files in a configuration channel.
usage: configchannel_listfiles CHANNEL ...
List all systems subscribed to a configuration channel.
usage: configchannel_listsystems CHANNEL
Remove configuration files.
usage: configchannel_removefile CHANNEL <FILE ...>
Sync configuration files between two configuration channels.
usage: configchannel_sync SOURCE_CHANNEL TARGET_CHANNEL
Update a configuration file.
usage: configchannel_updatefile CHANNEL FILE
Verify a configuration file.
usage: configchannel_verifyfile CHANNEL FILE <SYSTEMS> <SYSTEMS> may be substituted with any of the following targets: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
The following spacecmd commands are available for use with cryptographic keys.
Create a cryptographic key.
usage: cryptokey_create [options] options: -t GPG or SSL -d DESCRIPTION -f KEY_FILE
Delete a cryptographic key.
usage: cryptokey_delete NAME
Show the contents of a cryptographic key.
usage: cryptokey_details KEY ...
List all cryptographic keys (SSL, GPG).
usage: cryptokey_list
The following spacecmd commands are available for working with custom keys.
Create a custom key.
usage: custominfo_createkey [NAME] [DESCRIPTION]
Delete a custom key.
usage: custominfo_deletekey KEY ...
Show the details of a custom key.
usage: custominfo_details KEY ...
List all custom keys.
usage: custominfo_listkeys
Update a custom key.
usage: custominfo_updatekey [NAME] [DESCRIPTION]
The following spacecmd commands are available for working with kickstart distributions.
Create a Kickstart tree.
usage: distribution_create [options]

options:
-n NAME
-p path to tree
-b base channel to associate with
-t install type [fedora|rhel_4/5/6|suse|generic_rpm]
Delete a Kickstart tree.
usage: distribution_delete LABEL
Show the details of a Kickstart tree.
usage: distribution_details LABEL
List the available autoinstall trees.
usage: distribution_list
Rename a Kickstart tree.
usage: distribution_rename OLDNAME NEWNAME
Update the path of a Kickstart tree.
usage: distribution_update NAME [options]

options:
-p path to tree
-b base channel to associate with
-t install type [fedora|rhel_4/5/6|suse|generic_rpm]
The following spacecmd commands are available for use with errata data.
Apply a patch to all affected systems.
usage: errata_apply ERRATA|search:XXX ...
Delete a patch.
usage: errata_delete ERRATA|search:XXX ...
Show the details of a patch.
usage: errata_details ERRATA|search:XXX ...
List errata addressing a CVE.
usage: errata_findbycve CVE-YYYY-NNNN ...
List all patches.
usage: errata_list
List the systems affected by a patch.
usage: errata_listaffectedsystems ERRATA|search:XXX ...
List the CVEs addressed by a patch.
usage: errata_listcves ERRATA|search:XXX ...
Publish a patch to a channel.
usage: errata_publish ERRATA|search:XXX <CHANNEL ...>
List patches that meet user-provided criteria.
usage: errata_search CVE|RHSA|RHBA|RHEA|CLA ...

Example:
> errata_search CVE-2009:1674
> errata_search RHSA-2009:1674
Print a summary of all errata.
usage: errata_summary
The following spacecmd commands are available for working with kickstart file preservation lists.
Create a file preservation list.
usage: filepreservation_create [NAME] [FILE ...]
Delete a file preservation list.
usage: filepreservation_delete NAME
Show the details of a file preservation list.
usage: filepreservation_details NAME
List all file preservations.
usage: filepreservation_list
Display the API version of the server.
usage: get_apiversion
Print the expiration date of the server’s entitlement certificate.
usage: get_certificateexpiration
Display SUSE Manager server version.
usage: get_serverversion
Show the current session string.
usage: get_session
Add systems to a group.
usage: group_addsystems GROUP <SYSTEMS> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Backup a system group.
usage: group_backup NAME [OUTDIR]

OUTDIR defaults to $HOME/spacecmd-backup/group/YYYY-MM-DD/NAME
Create a system group.
usage: group_create [NAME] [DESCRIPTION]
Delete a system group.
usage: group_delete NAME ...
Show the details of a system group.
usage: group_details GROUP ...
List available system groups.
usage: group_list
List the members of a group.
usage: group_listsystems GROUP
Remove systems from a group.
usage: group_removesystems GROUP <SYSTEMS> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Restore a system group.
usage: group_restore INPUTDIR [NAME] ...
List all available spacecmd commands with the help function.
Check for additional help on a specific function by calling it with --help, for example: user_create --help.
Documented commands (type help <topic>):
========================================
activationkey_addchildchannels           org_trustdetails
activationkey_addconfigchannels          package_details
activationkey_addentitlements            package_listdependencies
activationkey_addgroups                  package_listerrata
activationkey_addpackages                package_listinstalledsystems
activationkey_clone                      package_listorphans
activationkey_create                     package_remove
activationkey_delete                     package_removeorphans
activationkey_details                    package_search
activationkey_diff                       repo_addfilters
activationkey_disable                    repo_clearfilters
activationkey_disableconfigdeployment    repo_create
activationkey_enable                     repo_delete
activationkey_enableconfigdeployment     repo_details
activationkey_export                     repo_list
activationkey_import                     repo_listfilters
activationkey_list                       repo_removefilters
activationkey_listbasechannel            repo_rename
activationkey_listchildchannels          repo_setfilters
activationkey_listconfigchannels         repo_updatessl
activationkey_listentitlements           repo_updateurl
activationkey_listgroups                 report_duplicates
activationkey_listpackages               report_errata
activationkey_listsystems                report_inactivesystems
activationkey_removechildchannels        report_ipaddresses
activationkey_removeconfigchannels       report_kernels
activationkey_removeentitlements         report_outofdatesystems
activationkey_removegroups               report_ungroupedsystems
activationkey_removepackages             scap_getxccdfscandetails
activationkey_setbasechannel             scap_getxccdfscanruleresults
activationkey_setconfigchannelorder      scap_listxccdfscans
activationkey_setcontactmethod           scap_schedulexccdfscan
activationkey_setdescription             schedule_cancel
activationkey_setuniversaldefault        schedule_details
activationkey_setusagelimit              schedule_getoutput
api                                      schedule_list
clear                                    schedule_listarchived
clear_caches                             schedule_listcompleted
configchannel_addfile                    schedule_listfailed
configchannel_backup                     schedule_listpending
configchannel_clone                      schedule_reschedule
configchannel_create                     snippet_create
configchannel_delete                     snippet_delete
configchannel_details                    snippet_details
configchannel_diff                       snippet_list
configchannel_export                     snippet_update
configchannel_filedetails                softwarechannel_adderrata
configchannel_forcedeploy                softwarechannel_adderratabydate
configchannel_import                     softwarechannel_addpackages
configchannel_list                       softwarechannel_addrepo
configchannel_listfiles                  softwarechannel_clone
configchannel_listsystems                softwarechannel_clonetree
configchannel_removefiles                softwarechannel_create
configchannel_sync                       softwarechannel_delete
configchannel_updatefile                 softwarechannel_details
configchannel_verifyfile                 softwarechannel_diff
cryptokey_create                         softwarechannel_errata_diff
cryptokey_delete                         softwarechannel_errata_sync
cryptokey_details                        softwarechannel_getorgaccess
cryptokey_list                           softwarechannel_list
custominfo_createkey                     softwarechannel_listallpackages
custominfo_deletekey                     softwarechannel_listbasechannels
custominfo_details                       softwarechannel_listchildchannels
custominfo_listkeys                      softwarechannel_listerrata
custominfo_updatekey                     softwarechannel_listerratabydate
distribution_create                      softwarechannel_listlatestpackages
distribution_delete                      softwarechannel_listpackages
distribution_details                     softwarechannel_listrepos
distribution_list                        softwarechannel_listsyncschedule
distribution_rename                      softwarechannel_listsystems
distribution_update                      softwarechannel_mirrorpackages
errata_apply                             softwarechannel_regenerateneededcache
errata_delete                            softwarechannel_regenerateyumcache
errata_details                           softwarechannel_removeerrata
errata_findbycve                         softwarechannel_removepackages
errata_list                              softwarechannel_removerepo
errata_listaffectedsystems               softwarechannel_removesyncschedule
errata_listcves                          softwarechannel_setorgaccess
errata_publish                           softwarechannel_setsyncschedule
errata_search                            softwarechannel_sync
errata_summary                           softwarechannel_syncrepos
filepreservation_create                  ssm_add
filepreservation_delete                  ssm_clear
filepreservation_details                 ssm_intersect
filepreservation_list                    ssm_list
get_apiversion                           ssm_remove
get_certificateexpiration                system_addchildchannels
get_serverversion                        system_addconfigchannels
get_session                              system_addconfigfile
group_addsystems                         system_addcustomvalue
group_backup                             system_addentitlements
group_create                             system_addnote
group_delete                             system_applyerrata
group_details                            system_comparepackageprofile
group_list                               system_comparepackages
group_listsystems                        system_comparewithchannel
group_removesystems                      system_createpackageprofile
group_restore                            system_delete
help                                     system_deletecrashes
history                                  system_deletenotes
kickstart_addactivationkeys              system_deletepackageprofile
kickstart_addchildchannels               system_deployconfigfiles
kickstart_addcryptokeys                  system_details
kickstart_addfilepreservations           system_getcrashfiles
kickstart_addoption                      system_installpackage
kickstart_addpackages                    system_list
kickstart_addscript                      system_listbasechannel
kickstart_addvariable                    system_listchildchannels
kickstart_clone                          system_listconfigchannels
kickstart_create                         system_listconfigfiles
kickstart_delete                         system_listcrashedsystems
kickstart_details                        system_listcrashesbysystem
kickstart_diff                           system_listcustomvalues
kickstart_disableconfigmanagement        system_listentitlements
kickstart_disableremotecommands          system_listerrata
kickstart_enableconfigmanagement         system_listevents
kickstart_enablelogging                  system_listhardware
kickstart_enableremotecommands           system_listinstalledpackages
kickstart_export                         system_listnotes
kickstart_getcontents                    system_listpackageprofiles
kickstart_getsoftwaredetails             system_listupgrades
kickstart_getupdatetype                  system_lock
kickstart_import                         system_reboot
kickstart_import_raw                     system_removechildchannels
kickstart_importjson                     system_removeconfigchannels
kickstart_list                           system_removecustomvalues
kickstart_listactivationkeys             system_removeentitlement
kickstart_listchildchannels              system_removepackage
kickstart_listcryptokeys                 system_rename
kickstart_listcustomoptions              system_runscript
kickstart_listoptions                    system_schedulehardwarerefresh
kickstart_listpackages                   system_schedulepackagerefresh
kickstart_listscripts                    system_search
kickstart_listvariables                  system_setbasechannel
kickstart_removeactivationkeys           system_setconfigchannelorder
kickstart_removechildchannels            system_setcontactmethod
kickstart_removecryptokeys               system_show_packageversion
kickstart_removefilepreservations        system_syncpackages
kickstart_removeoptions                  system_unlock
kickstart_removepackages                 system_updatecustomvalue
kickstart_removescript                   system_upgradepackage
kickstart_removevariables                toggle_confirmations
kickstart_rename                         user_adddefaultgroup
kickstart_setcustomoptions               user_addgroup
kickstart_setdistribution                user_addrole
kickstart_setlocale                      user_create
kickstart_setpartitions                  user_delete
kickstart_setselinux                     user_details
kickstart_setupdatetype                  user_disable
kickstart_updatevariable                 user_enable
list_proxies                             user_list
login                                    user_listavailableroles
logout                                   user_removedefaultgroup
org_addtrust                             user_removegroup
org_create                               user_removerole
org_delete                               user_setemail
org_details                              user_setfirstname
org_list                                 user_setlastname
org_listtrusts                           user_setpassword
org_listusers                            user_setprefix
org_removetrust                          whoami
org_rename                               whoamitalkingto

Miscellaneous help topics:
==========================
time systems ssm
List recent commands using the history
command.
spacecmd {SSM:0}> history
1 help
2 api
3 exit
4 help
5 time --help
6 quit
7 clear
spacecmd {SSM:0}>
The following spacecmd functions are available for use with Kickstart.
Add activation keys to a Kickstart profile.
usage: kickstart_addactivationkeys PROFILE <KEY ...>
Add child channels to a Kickstart profile.
usage: kickstart_addchildchannels PROFILE <CHANNEL ...>
Add cryptography keys to a Kickstart profile.
usage: kickstart_addcryptokeys PROFILE <KEY ...>
Add file preservations to a Kickstart profile.
usage: kickstart_addfilepreservations PROFILE <FILELIST ...>
Set an option for a Kickstart profile.
usage: kickstart_addoption PROFILE KEY [VALUE]
Add packages to a Kickstart profile.
usage: kickstart_addpackages PROFILE <PACKAGE ...>
Add a script to a Kickstart profile.
usage: kickstart_addscript PROFILE [options]

options:
-p PROFILE
-e EXECUTION_TIME ['pre', 'post']
-i INTERPRETER
-f FILE
-c execute in a chroot environment
-t ENABLING_TEMPLATING
Add a variable to a Kickstart profile.
usage: kickstart_addvariable PROFILE KEY VALUE
Clone a Kickstart profile.
usage: kickstart_clone [options] options: -n NAME -c CLONE_NAME
Create a Kickstart profile.
usage: kickstart_create [options]

options:
-n NAME
-d DISTRIBUTION
-p ROOT_PASSWORD
-v VIRT_TYPE ['none', 'para_host', 'qemu', 'xenfv', 'xenpv']
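For example, a minimal sketch of creating a profile without virtualization; the profile name and distribution label are placeholders:
spacecmd {SSM:0}> kickstart_create -n example-profile -d example-distribution -p PASSWORD -v none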
Delete kickstart profile(s).
usage: kickstart_delete PROFILE
usage: kickstart_delete PROFILE1 PROFILE2
usage: kickstart_delete "PROF*"
Show the details of a Kickstart profile.
usage: kickstart_details PROFILE
List differences between two kickstart files.
usage: kickstart_diff SOURCE_CHANNEL TARGET_CHANNEL
Disable configuration management on a Kickstart profile.
usage: kickstart_disableconfigmanagement PROFILE
Disable remote commands on a Kickstart profile.
usage: kickstart_disableremotecommands PROFILE
Enable configuration management on a Kickstart profile.
usage: kickstart_enableconfigmanagement PROFILE
Enable logging for a Kickstart profile.
usage: kickstart_enablelogging PROFILE
Enable remote commands on a Kickstart profile.
usage: kickstart_enableremotecommands PROFILE
Export kickstart profile(s) to json formatted file.
usage: kickstart_export <KSPROFILE>... [options]

options:
-f outfile.json : specify an output filename, defaults to <KSPROFILE>.json if exporting a single kickstart, profiles.json for multiple kickstarts, or ks_all.json if no KSPROFILE specified (export ALL)

Note: KSPROFILE list is optional, default is to export ALL
Show the contents of a Kickstart profile as they would be presented to a client.
usage: kickstart_getcontents LABEL
Gets kickstart profile software details.
usage: kickstart_getsoftwaredetails KS_LABEL
usage: kickstart_getsoftwaredetails KS_LABEL KS_LABEL2 ...
Get the update type for one or more kickstart profiles.
usage: kickstart_getupdatetype PROFILE
usage: kickstart_getupdatetype PROFILE1 PROFILE2
usage: kickstart_getupdatetype "PROF*"
Import a Kickstart profile from a file.
usage: kickstart_import [options]

options:
-f FILE
-n NAME
-d DISTRIBUTION
-v VIRT_TYPE ['none', 'para_host', 'qemu', 'xenfv', 'xenpv']
Import a raw Kickstart or autoyast profile from a file.
usage: kickstart_import_raw [options]

options:
-f FILE
-n NAME
-d DISTRIBUTION
-v VIRT_TYPE ['none', 'para_host', 'qemu', 'xenfv', 'xenpv']
Import kickstart profile(s) from json file.
usage: kickstart_importjson <JSONFILES...>
List the available Kickstart profiles.
usage: kickstart_list
List the activation keys associated with a Kickstart profile.
usage: kickstart_listactivationkeys PROFILE
List the child channels of a Kickstart profile.
usage: kickstart_listchildchannels PROFILE
List the crypto keys associated with a Kickstart profile.
usage: kickstart_listcryptokeys PROFILE
List the custom options of a Kickstart profile.
usage: kickstart_listcustomoptions PROFILE
List the options of a Kickstart profile.
usage: kickstart_listoptions PROFILE
List the packages for a Kickstart profile.
usage: kickstart_listpackages PROFILE
List the scripts for a Kickstart profile.
usage: kickstart_listscripts PROFILE
List the variables of a Kickstart profile.
usage: kickstart_listvariables PROFILE
Remove activation keys from a Kickstart profile.
usage: kickstart_removeactivationkeys PROFILE <KEY ...>
Remove child channels from a Kickstart profile.
usage: kickstart_removechildchannels PROFILE <CHANNEL ...>
Remove crypto keys from a Kickstart profile.
usage: kickstart_removecryptokeys PROFILE <KEY ...>
Remove file preservations from a Kickstart profile.
usage: kickstart_removefilepreservations PROFILE <FILE ...>
Remove options from a Kickstart profile.
usage: kickstart_removeoptions PROFILE <OPTION ...>
Remove packages from a Kickstart profile.
usage: kickstart_removepackages PROFILE <PACKAGE ...>
Remove a script from a Kickstart profile.
usage: kickstart_removescript PROFILE [ID]
Remove variables from a Kickstart profile.
usage: kickstart_removevariables PROFILE <KEY ...>
Rename a Kickstart profile.
usage: kickstart_rename OLDNAME NEWNAME
Set custom options for a Kickstart profile.
usage: kickstart_setcustomoptions PROFILE
Set the distribution for a Kickstart profile.
usage: kickstart_setdistribution PROFILE DISTRIBUTION
Set the locale for a Kickstart profile.
usage: kickstart_setlocale PROFILE LOCALE
Set the partitioning scheme for a Kickstart profile.
usage: kickstart_setpartitions PROFILE
Set the SELinux mode for a Kickstart profile.
usage: kickstart_setselinux PROFILE MODE
Set the update type for one or more kickstart profiles.
usage: kickstart_setupdatetype [options] KS_LABEL

options:
-u UPDATE_TYPE ['red_hat', 'all', 'none']
Update a variable in a Kickstart profile.
usage: kickstart_updatevariable PROFILE KEY VALUE
The following spacecmd function is available for listing proxies.
List the proxies within the user’s organization.
usage: list_proxies
The following spacecmd functions are available for use with organizations.
Add a trust between two organizations.
usage: org_addtrust YOUR_ORG ORG_TO_TRUST
Create an organization.
usage: org_create [options]

options:
-n ORG_NAME
-u USERNAME
-P PREFIX (Dr., Mr., Miss, Mrs., Ms.)
-f FIRST_NAME
-l LAST_NAME
-e EMAIL
-p PASSWORD
--pam enable PAM authentication
Delete an organization.
usage: org_delete NAME
Show the details of an organization.
usage: org_details NAME
List all organizations.
usage: org_list
List an organization’s trusts.
usage: org_listtrusts NAME
List an organization’s users.
usage: org_listusers NAME
Remove a trust between two organizations.
usage: org_removetrust YOUR_ORG TRUSTED_ORG
Rename an organization.
usage: org_rename OLDNAME NEWNAME
Show the details of an organizational trust.
usage: org_trustdetails TRUSTED_ORG
The following spacecmd functions are available for working with packages.
Show the details of a software package.
usage: package_details PACKAGE ...
List the dependencies for a package.
usage: package_listdependencies PACKAGE
List the errata that provide this package.
usage: package_listerrata PACKAGE ...
List the systems with a package installed.
usage: package_listinstalledsystems PACKAGE ...
List packages that are not in a channel.
usage: package_listorphans
Remove a package from SUSE Manager/Satellite.
usage: package_remove PACKAGE ...
Remove packages that are not in a channel.
usage: package_removeorphans
Find packages that meet the given criteria.
usage: package_search NAME|QUERY

Example: package_search kernel

Advanced Search:
Available Fields: name, epoch, version, release, arch, description, summary
Example: name:kernel AND version:2.6.18 AND -description:devel
The following spacecmd functions are available for working with repositories.
Add filters for a user repository.
usage: repo_addfilters repo <filter ...>
Clears the filters for a user repository.
usage: repo_clearfilters repo
Create a user repository.
usage: repo_create <options>

options:
-n, --name  name of repository
-u, --url  url of repository
--ca  SSL CA certificate (not required)
--cert  SSL Client certificate (not required)
--key  SSL Client key (not required)
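For example, a minimal sketch of creating a repository; the repository name and URL are placeholders:
spacecmd {SSM:0}> repo_create -n example-repo -u http://example.com/pub/repo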
Delete a user repository.
usage: repo_delete <repo ...>
Show the details of a user repository.
usage: repo_details <repo ...>
List all available user repositories.
usage: repo_list
Show the filters for a user repository.
usage: repo_listfilters repo
Remove filters from a user repository.
usage: repo_removefilters repo <filter ...>
Rename a user repository.
usage: repo_rename OLDNAME NEWNAME
Set the filters for a user repo.
usage: repo_setfilters repo <filter ...>
Change the SSL certificates of a user repository.
usage: repo_updatessl <options> options: --ca SSL CA certificate (not required) --cert SSL Client certificate (not required) --key SSL Client key (not required)
Change the URL of a user repository.
usage: repo_updateurl <repo> <url>
The following spacecmd functions are available for working with reports.
List duplicate system profiles.
usage: report_duplicates
List all errata and how many systems they affect.
usage: report_errata [ERRATA|search:XXX ...]
List all inactive systems.
usage: report_inactivesystems [DAYS]
List the hostname and IP of each system.
usage: report_ipaddresses [<SYSTEMS>] <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
List the running kernel of each system.
usage: report_kernels [<SYSTEMS>] <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
List all out-of-date systems.
usage: report_outofdatesystems
List all ungrouped systems.
usage: report_ungroupedsystems
The following spacecmd functions are available for working with OpenSCAP.
Get details of a given OpenSCAP XCCDF scan.
usage: scap_getxccdfscandetails <XID>
Return a full list of RuleResults for a given OpenSCAP XCCDF scan.
usage: scap_getxccdfscanruleresults <XID>
Return a list of finished OpenSCAP scans for the given systems.
usage: scap_listxccdfscans <SYSTEMS>
Schedule an OpenSCAP XCCDF scan.
usage: scap_schedulexccdfscan PATH_TO_XCCDF_FILE XCCDF_OPTIONS SYSTEMS

Example:
> scap_schedulexccdfscan '/usr/share/openscap/scap-security-xccdf.xml' 'profile Web-Default' system-scap.example.com
The following spacecmd functions are available for working with scheduling.
Cancel a scheduled action.
usage: schedule_cancel ID|* ...
Show the details of a scheduled action.
usage: schedule_details ID
Show the output from an action.
usage: schedule_getoutput ID
List all actions.
usage: schedule_list [BEGINDATE] [ENDDATE]

Dates can be any of the following:

Explicit Dates:
Dates can be expressed as explicit date strings in the YYYYMMDD[HHMM] format. The year, month and day are required, while the hours and minutes are not; the hours and minutes will default to 0000 if no values are provided.

Deltas:
Dates can be expressed as delta values. For example, '2h' would mean 2 hours in the future. You can also use negative values to express times in the past (e.g., -7d would be one week ago).

Units:
s -> seconds
m -> minutes
h -> hours
d -> days
List archived actions.
usage: schedule_listarchived [BEGINDATE] [ENDDATE]

Dates can be any of the following:

Explicit Dates:
Dates can be expressed as explicit date strings in the YYYYMMDD[HHMM] format. The year, month and day are required, while the hours and minutes are not; the hours and minutes will default to 0000 if no values are provided.

Deltas:
Dates can be expressed as delta values. For example, '2h' would mean 2 hours in the future. You can also use negative values to express times in the past (e.g., -7d would be one week ago).

Units:
s -> seconds
m -> minutes
h -> hours
d -> days
List completed actions.
usage: schedule_listcompleted [BEGINDATE] [ENDDATE]

Dates can be any of the following:

Explicit Dates:
Dates can be expressed as explicit date strings in the YYYYMMDD[HHMM] format. The year, month and day are required, while the hours and minutes are not; the hours and minutes will default to 0000 if no values are provided.

Deltas:
Dates can be expressed as delta values. For example, '2h' would mean 2 hours in the future. You can also use negative values to express times in the past (e.g., -7d would be one week ago).

Units:
s -> seconds
m -> minutes
h -> hours
d -> days
List failed actions.
usage: schedule_listfailed [BEGINDATE] [ENDDATE]

Dates can be any of the following:

Explicit Dates:
Dates can be expressed as explicit date strings in the YYYYMMDD[HHMM] format. The year, month and day are required, while the hours and minutes are not; the hours and minutes will default to 0000 if no values are provided.

Deltas:
Dates can be expressed as delta values. For example, '2h' would mean 2 hours in the future. You can also use negative values to express times in the past (e.g., -7d would be one week ago).

Units:
s -> seconds
m -> minutes
h -> hours
d -> days
List pending actions.
usage: schedule_listpending [BEGINDATE] [ENDDATE]

Dates can be any of the following:

Explicit Dates:
Dates can be expressed as explicit date strings in the YYYYMMDD[HHMM] format. The year, month and day are required, while the hours and minutes are not; the hours and minutes will default to 0000 if no values are provided.

Deltas:
Dates can be expressed as delta values. For example, '2h' would mean 2 hours in the future. You can also use negative values to express times in the past (e.g., -7d would be one week ago).

Units:
s -> seconds
m -> minutes
h -> hours
d -> days
Reschedule failed actions.
usage: schedule_reschedule ID|* ...
The following spacecmd functions are available for working with Kickstart snippets.
Create a Kickstart snippet.
usage: snippet_create [options] options: -n NAME -f FILE
Delete a Kickstart snippet.
usage: snippet_delete NAME
Show the contents of a snippet.
usage: snippet_details SNIPPET ...
List the available Kickstart snippets.
usage: snippet_list
Update a Kickstart snippet.
usage: snippet_update NAME
The following spacecmd functions are available for working with software channels.
Add patches from one channel into another channel.
usage: softwarechannel_adderrata SOURCE DEST <ERRATA|search:XXX ...>

Options:
-q/--quick : Don't display list of packages (slightly faster)
-s/--skip : Skip errata which appear to exist already in DEST
Add errata from one channel into another channel based on a date range.
usage: softwarechannel_adderratabydate [options] SOURCE DEST BEGINDATE ENDDATE

Date format: YYYYMMDD

Options:
-p/--publish : Publish errata to the channel (don't clone)
Add packages to a software channel.
usage: softwarechannel_addpackages CHANNEL <PACKAGE ...>
Add a repo to a software channel.
usage: softwarechannel_addrepo CHANNEL REPO
Clone a software channel.
usage: softwarechannel_clone [options]

options:
-s SOURCE_CHANNEL
-n NAME
-l LABEL
-p PARENT_CHANNEL
--gpg-copy/-g (copy SOURCE_CHANNEL GPG details)
--gpg-url GPG_URL
--gpg-id GPG_ID
--gpg-fingerprint GPG_FINGERPRINT
-o do not clone any patches
--regex/-x "s/foo/bar" : Optional regex replacement, replaces foo with bar in the clone name and label
Clone a software channel and its child channels.
usage: softwarechannel_clonetree [options]

e.g.:
softwarechannel_clonetree foobasechannel -p "my_"
softwarechannel_clonetree foobasechannel -x "s/foo/bar"
softwarechannel_clonetree foobasechannel -x "s/^/my_"

options:
-s/--source-channel SOURCE_CHANNEL
-p/--prefix PREFIX (is prepended to the label and name of all channels)
--gpg-copy/-g (copy GPG details for corresponding source channel)
--gpg-url GPG_URL (applied to all channels)
--gpg-id GPG_ID (applied to all channels)
--gpg-fingerprint GPG_FINGERPRINT (applied to all channels)
-o do not clone any errata
--regex/-x "s/foo/bar" : Optional regex replacement, replaces foo with bar in the clone name, label and description
Create a software channel.
usage: softwarechannel_create [options]

options:
-n NAME
-l LABEL
-p PARENT_CHANNEL
-a ARCHITECTURE ['ia32', 'ia64', 'x86_64', 'ppc', 'i386-sun-solaris', 'sparc-sun-solaris']
-c CHECKSUM ['sha1', 'sha256', 'sha384', 'sha512']
-u GPG_URL
-i GPG_ID
-f GPG_FINGERPRINT
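For example, a minimal sketch of creating a custom child channel; the name, labels, and parent channel are placeholders:
spacecmd {SSM:0}> softwarechannel_create -n "Example Channel" -l example-channel -p example-parent-channel -a x86_64 -c sha256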
Delete a software channel.
usage: softwarechannel_delete <CHANNEL ...>
Show the details of a software channel.
usage: softwarechannel_details <CHANNEL ...>
Check the difference between software channels.
usage: softwarechannel_diff SOURCE_CHANNEL TARGET_CHANNEL
Check the difference between the errata of two software channels.
usage: softwarechannel_errata_diff SOURCE_CHANNEL TARGET_CHANNEL
Sync errata of two software channels.
usage: softwarechannel_errata_sync SOURCE_CHANNEL TARGET_CHANNEL
Get the org-access for the software channel.
usage: softwarechannel_getorgaccess : get org access for all channels
usage: softwarechannel_getorgaccess <channel_label(s)> : get org access for specific channel(s)
List all available software channels.
usage: softwarechannel_list [options]

options:
-v verbose (display label and summary)
-t tree view (pretty-print child-channels)
List all packages in a channel.
usage: softwarechannel_listallpackages CHANNEL
List all base software channels.
usage: softwarechannel_listbasechannels [options] options: -v verbose (display label and summary)
List child software channels.
usage: softwarechannel_listchildchannels [options]

softwarechannel_listchildchannels : List all child channels
softwarechannel_listchildchannels CHANNEL : List children for a specific base channel

options:
-v verbose (display label and summary)
List the errata associated with a software channel.
usage: softwarechannel_listerrata <CHANNEL ...> [from=yyyymmdd [to=yyyymmdd]]
List errata from a channel based on a date range.
usage: softwarechannel_listerratabydate CHANNEL BEGINDATE ENDDATE Date format : YYYYMMDD
List the newest version of all packages in a channel.
usage: softwarechannel_listlatestpackages CHANNEL
List the most recent packages available from a software channel.
usage: softwarechannel_listpackages CHANNEL
List the repos for a software channel.
usage: softwarechannel_listrepos CHANNEL
List sync schedules for all software channels.
usage: softwarechannel_listsyncschedule : List all channels
List all systems subscribed to a software channel.
usage: softwarechannel_listsystems CHANNEL
Download packages of a given channel.
usage: softwarechannel_mirrorpackages CHANNEL Options: -l/--latest : Only mirror latest package version
Regenerate the needed errata and package cache for all systems.
usage: softwarechannel_regenerateneededcache
Regenerate the YUM cache for a software channel.
usage: softwarechannel_regenerateyumcache <CHANNEL ...>
Remove patches from a software channel.
usage: softwarechannel_removeerrata CHANNEL <ERRATA|search:XXX ...>
Remove packages from a software channel.
usage: softwarechannel_removepackages CHANNEL <PACKAGE ...>
Remove a repo from a software channel.
usage: softwarechannel_removerepo CHANNEL REPO
Removes the repo sync schedule for a software channel.
usage: softwarechannel_removesyncschedule <CHANNEL>
Set the org-access for the software channel.
usage : softwarechannel_setorgaccess <channel_label> [options] -d,--disable : disable org access (private, no org sharing) -e,--enable : enable org access (public access to all trusted orgs)
Sets the repo sync schedule for a software channel.
usage: softwarechannel_setsyncschedule <CHANNEL> <SCHEDULE>

The schedule is specified in Quartz CronTrigger format without enclosing quotes. For example, to set a schedule of every day at 1am, <SCHEDULE> would be 0 0 1 * * ?
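For example, to sync a hypothetical channel example-channel every day at 1am:
spacecmd {SSM:0}> softwarechannel_setsyncschedule example-channel 0 0 1 * * ?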
Sync the packages of two software channels.
usage: softwarechannel_sync SOURCE_CHANNEL TARGET_CHANNEL
Sync user repositories for a software channel.
usage: softwarechannel_syncrepos <CHANNEL ...>
The following spacecmd functions are available for use with System Set Manager.
Add systems to the SSM.
usage: ssm_add <SYSTEMS> see 'help ssm' for more details <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Remove all systems from the SSM.
usage: ssm_clear
Replace the current SSM with the intersection of the current list of systems and the list of systems passed as arguments.
usage: ssm_intersect <SYSTEMS> see 'help ssm' for more details <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
List the systems currently in the SSM.
usage: ssm_list see 'help ssm' for more details
Remove systems from the SSM.
usage: ssm_remove <SYSTEMS> see 'help ssm' for more details <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
The following spacecmd functions are available for use with systems.
Add child channels to a system.
usage: system_addchildchannels <SYSTEMS> <CHANNEL ...> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Add config channels to a system.
usage: system_addconfigchannels <SYSTEMS> <CHANNEL ...> [options] options: -t add channels to the top of the list -b add channels to the bottom of the list <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Create a configuration file.
Note: this is only for system sandbox or locally-managed files. Centrally managed files should be created via configchannel_addfile.

usage: system_addconfigfile [SYSTEM] [options]

options:
-S/--sandbox : list only system-sandbox files
-L/--local : list only locally managed files
-p PATH
-r REVISION
-o OWNER [default: root]
-g GROUP [default: root]
-m MODE [default: 0644]
-x SELINUX_CONTEXT
-d path is a directory
-s path is a symlink
-b path is a binary (or other file which needs base64 encoding)
-t SYMLINK_TARGET
-f local path to file contents

Note re binary/base64: Some text files, notably those containing trailing newlines or ASCII escape characters (or other characters not allowed in XML), need to be sent as binary (-b). Some effort is made to auto-detect files which require this, but you may need to specify it explicitly.
Set a custom value for a system.
usage: system_addcustomvalue KEY VALUE <SYSTEMS> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Add entitlements to a system.
usage: system_addentitlements <SYSTEMS> ENTITLEMENT <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Set a note for a system.
usage: system_addnote <SYSTEM> [options] options: -s SUBJECT -b BODY <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Apply errata to a system.
usage: system_applyerrata <SYSTEMS> [ERRATA|search:XXX ...] <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Compare a system against a package profile.
usage: system_comparepackageprofile <SYSTEMS> PROFILE <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Compare the packages between two systems.
usage: system_comparepackages SOME_SYSTEM ANOTHER_SYSTEM
Compare the installed packages on a system with those in the channels it is registered to, or optionally some other channel.
usage: system_comparewithchannel <SYSTEMS> [options] options: -c/--channel : Specific channel to compare against, default is those subscribed to, including child channels <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Create a package profile.
usage: system_createpackageprofile SYSTEM [options] options: -n NAME -d DESCRIPTION
Delete a system profile.
usage: system_delete <SYSTEMS> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Delete crashes reported by spacewalk-abrt.
usage: Delete all crashes for all systems : system_deletecrashes [--verbose]
usage: Delete all crashes for a single system: system_deletecrashes -i sys_id [--verbose]
usage: Delete a single crash record : system_deletecrashes -c crash_id [--verbose]
Delete notes from a system.
usage: system_deletenotes <SYSTEM> <ID|*> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Delete a package profile.
usage: system_deletepackageprofile PROFILE
Deploy all configuration files for a system.
usage: system_deployconfigfiles <SYSTEMS> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Show the details of a system profile.
usage: system_details <SYSTEMS> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Download all files for a crash record.
usage: system_getcrashfiles -c crash_id [--verbose]
usage: system_getcrashfiles -c crash_id [--dest_folder=/tmp/crash_files] [--verbose]
Install a package on a system.
usage: system_installpackage <SYSTEMS> <PACKAGE ...> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
List all system profiles.
usage: system_list
List the base channel for a system.
usage: system_listbasechannel <SYSTEMS> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
List the child channels for a system.
usage: system_listchildchannels <SYSTEMS> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
List the config channels of a system.
usage: system_listconfigchannels <SYSTEMS> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
List the managed config files of a system.
usage: system_listconfigfiles <SYSTEMS> [options]

options:
-s/--sandbox : list only system-sandbox files
-l/--local : list only locally managed files
-c/--central : list only centrally managed files
-q/--quiet : quiet mode (omits the header)

<SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
List all systems that have experienced a crash, as reported by spacewalk-abrt.
usage: system_listcrashedsystems
List all reported crashes for a system.
usage: system_listcrashesbysystem -i sys_id
List the custom values for a system.
usage: system_listcustomvalues <SYSTEMS> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
List the entitlements for a system.
usage: system_listentitlements <SYSTEMS> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
List available errata for a system.
usage: system_listerrata <SYSTEMS> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
List the event history for a system.
usage: system_listevents <SYSTEMS> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
List the hardware details of a system.
usage: system_listhardware <SYSTEMS> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
List the installed packages on a system.
usage: system_listinstalledpackages <SYSTEMS> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
List the available notes for a system.
usage: system_listnotes <SYSTEM> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
List all package profiles.
usage: system_listpackageprofiles
List the available upgrades for a system.
usage: system_listupgrades <SYSTEMS> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Lock a system.
usage: system_lock <SYSTEMS> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Reboot a system.
usage: system_reboot <SYSTEMS> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Remove child channels from a system.
usage: system_removechildchannels <SYSTEMS> <CHANNEL ...> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Remove config channels from a system.
usage: system_removeconfigchannels <SYSTEMS> <CHANNEL ...> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Remove a custom value for a system.
usage: system_removecustomvalues <SYSTEMS> <KEY ...> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Remove an entitlement from a system.
usage: system_removeentitlement <SYSTEMS> ENTITLEMENT <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Remove a package from a system.
usage: system_removepackage <SYSTEMS> <PACKAGE ...> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Rename a system profile.
usage: system_rename OLDNAME NEWNAME
Schedule a script to run on the list of systems provided.
usage: system_runscript <SYSTEMS> [options]

options:
-u USER
-g GROUP
-t TIMEOUT
-s START_TIME
-l LABEL
-f FILE

<SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL

Dates can be any of the following:

Explicit Dates:
Dates can be expressed as explicit date strings in the YYYYMMDD[HHMM] format. The year, month and day are required, while the hours and minutes are not; the hours and minutes will default to 0000 if no values are provided.

Deltas:
Dates can be expressed as delta values. For example, '2h' would mean 2 hours in the future. You can also use negative values to express times in the past (e.g., -7d would be one week ago).

Units:
s -> seconds
m -> minutes
h -> hours
d -> days
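For example, a sketch of scheduling a script to run as root two hours from now on a hypothetical system group; the group name, label, and script path are placeholders:
spacecmd {SSM:0}> system_runscript group:web-servers -u root -g root -t 600 -s 2h -l example-check -f /tmp/example-check.sh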
Schedule a hardware refresh for a system.
usage: system_schedulehardwarerefresh <SYSTEMS> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Schedule a software package refresh for a system.
usage: system_schedulepackagerefresh <SYSTEMS> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
List systems that match the given criteria.
usage: system_search QUERY

Available Fields:
id
name
ip
hostname
device
vendor
driver
uuid

Examples:
> system_search device:vmware
> system_search ip:192.168.82
Set a system’s base software channel.
usage: system_setbasechannel <SYSTEMS> CHANNEL <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Set the ranked order of configuration channels.
usage: system_setconfigchannelorder <SYSTEMS> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Set the contact method for given system(s).
Available contact methods: ['default', 'ssh-push', 'ssh-push-tunnel'] usage: system_setcontactmethod <SYSTEMS> <CONTACT_METHOD> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Show the version of an installed package on the given system(s).
usage: system_show_packageversion <SYSTEM> <PACKAGE> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Sync packages between two systems.
usage: system_syncpackages SOURCE TARGET
Unlock a system.
usage: system_unlock <SYSTEMS> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Update a custom value for a system.
usage: system_updatecustomvalue KEY VALUE <SYSTEMS> <SYSTEMS> can be any of the following: name ssm (see 'help ssm') search:QUERY (see 'help system_search') group:GROUP channel:CHANNEL
Upgrade a package on a system.
usage: system_upgradepackage <SYSTEMS> <PACKAGE ...>|*

<SYSTEMS> can be any of the following:
  name
  ssm (see 'help ssm')
  search:QUERY (see 'help system_search')
  group:GROUP
  channel:CHANNEL
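For example, to upgrade every upgradable package on all systems currently in the SSM, using the * wildcard from the usage above:

  spacecmd {SSM:0}> system_upgradepackage ssm *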
Toggle confirmation messages on/off.
usage: toggle_confirmations
Add a default group to a user account.
usage: user_adddefaultgroup USER <GROUP ...>
Add a group to a user account.
usage: user_addgroup USER <GROUP ...>
Add a role to a user account.
usage: user_addrole USER ROLE
Create a user.
usage: user_create [options]

options:
  -u USERNAME
  -f FIRST_NAME
  -l LAST_NAME
  -e EMAIL
  -p PASSWORD
  --pam enable PAM authentication
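For example, the following call creates a user account; all values shown are hypothetical placeholders:

  spacecmd {SSM:0}> user_create -u jdoe -f Jane -l Doe -e jdoe@example.com -p secret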
Delete a user.
usage: user_delete NAME
Show the details of a user.
usage: user_details USER ...
Disable a user account.
usage: user_disable NAME
Enable a user account.
usage: user_enable NAME
List all users.
usage: user_list
List all available roles for users.
usage: user_listavailableroles
Remove a default group from a user account.
usage: user_removedefaultgroup USER <GROUP ...>
Remove a group from a user account.
usage: user_removegroup USER <GROUP ...>
Remove a role from a user account.
usage: user_removerole USER ROLE
Set a user account’s email field.
usage: user_setemail USER EMAIL
Set a user account’s first name field.
usage: user_setfirstname USER FIRST_NAME
Set a user account’s last name field.
usage: user_setlastname USER LAST_NAME
Set a user account’s password.
usage: user_setpassword USER PASSWORD
Set a user account’s name prefix field.
usage: user_setprefix USER PREFIX
The following command is available for returning the name of the user currently logged in to spacecmd.
Print the currently logged-in spacecmd user.

  spacecmd {SSM:0}> whoami
  admin
The following spacecmd function is available for returning the hostname of the server spacecmd is connected to.
Return the hostname of the server that spacecmd is connected to.

  spacecmd {SSM:0}> whoamitalkingto
  MGR_SERVER_HOSTNAME
The following help topics are printed with all functions that require the relevant information.
Dates can be any of the following:
Explicit Dates:
  Dates can be expressed as explicit date strings in the YYYYMMDD[HHMM] format. The year, month and day are required, while the hours and minutes are not; the hours and minutes will default to 0000 if no values are provided.

Deltas:
  Dates can be expressed as delta values. For example, '2h' would mean 2 hours in the future. You can also use negative values to express times in the past (e.g., -7d would be one week ago).

Units:
  s -> seconds
  m -> minutes
  h -> hours
  d -> days
<SYSTEMS> can be any of the following:
  name
  ssm (see 'help ssm')
  search:QUERY (see 'help system_search')
  group:GROUP
  channel:CHANNEL
The System Set Manager (SSM) is a group of systems on which you can perform tasks as a group.
Adding Systems:
  > ssm_add group:rhel5-x86_64
  > ssm_add channel:rhel-x86_64-server-5
  > ssm_add search:device:vmware
  > ssm_add host.example.com

Intersections:
  > ssm_add group:rhel5-x86_64
  > ssm_intersect group:web-servers

Using the SSM:
  > system_installpackage ssm zsh
  > system_runscript ssm
Some ports are only relevant if you actually run the related service on the SUSE Manager server.
Inbound / TCP/UDP / DHCP
Required when SUSE Manager is configured as a DHCP server for systems requesting IP addresses.
Inbound / TCP/UDP / TFTP
Used when SUSE Manager is configured as a PXE server and allows installation and re-installation of PXE-boot enabled systems.
Inbound / TCP / HTTP
Client and proxy server requests travel via HTTP or HTTPS.
Outbound / TCP / HTTP
Used to contact SUSE Customer Center/Novell Customer Center.
Inbound / TCP / HTTPS
All Web UI, client, and proxy server requests travel via HTTP or HTTPS.
Outbound / TCP / HTTPS
SUSE Manager uses this port to reach SUSE Customer Center (unless running in a disconnected mode with RMT or SMT, as described in Book “Best Practices”, Chapter 2 “Managing Your Subscriptions”, Section 2.2 “Disconnected Setup with RMT or SMT (DMZ)”).
Inbound / TCP / osad
When you wish to push actions to clients, this port is required by the osad daemon running on your client systems.
Inbound/Outbound / TCP / jabberd
Needed if you push actions to or via a SUSE Manager Proxy.
Inbound / TCP / salt
Required by the Salt-master to accept communication requests via TCP from minions. The connection is initiated by the minion and remains open to allow the master to send commands. This port uses a publish/subscribe topology; the minion subscribes to notifications from the master.
Inbound / TCP / salt
Required by the Salt-master to accept communication requests via TCP from minions. The connection is initiated by the minion and is open only when needed. Usually, minions will open this port when they have to report results to the master, such as when a command received on port 4505 has finished. This port uses a request/response topology; the minion sends requests to the master.
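To check on the SUSE Manager server that the salt-master is actually listening on the publish port 4505 and on the request/response port (4506 is Salt's standard return port; the grep pattern below is only a suggestion):

  # List listening TCP sockets of the salt-master (ports 4505 and 4506)
  ss -tlnp | grep -E ':(4505|4506)'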
TCP
For Cobbler.
Internal
Satellite-search API, used by the RHN application in Tomcat and Taskomatic.
Internal
Taskomatic API, used by the RHN application in Tomcat.
Internal
Auditlog-keeper to database.
Internal
Auditlog-keeper API, used by the RHN application in Tomcat.
Internal
Tomcat shutdown port.
Internal
Tomcat to Apache HTTPD (AJP).
Internal
Tomcat to Apache HTTPD (HTTP).
Internal
Salt-API, used by the RHN application in Tomcat and Taskomatic.
Internal / TCP
Port for a TCP connection to the Java Virtual Machine (JVM) that runs Taskomatic and the search (satellite-search).
Anything from port 32768 upward (more exactly, the range you can see with cat /proc/sys/net/ipv4/ip_local_port_range) is an ephemeral port, typically used as the receiving end of a TCP connection. If process A opens a TCP connection to process B (for example, to destination port 22), A chooses an arbitrary source port from this range to pair with destination port 22.
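You can inspect the ephemeral port range in use on a given machine directly; the output shows the lower and upper bound (the values below are typical kernel defaults, not guaranteed):

  cat /proc/sys/net/ipv4/ip_local_port_range
  # typical output:
  # 32768   60999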
[Figure: graphical representation of the ports used in SUSE Manager]
Port 80 (http) is not used to serve the Web UI, and is closed in most installations. Port 80 is used temporarily for some bootstrap repositories and automated installations.
Inbound
Required when using the ssh-push or ssh-push-tunnel contact methods. Check-in on clients connected to a SUSE Manager Proxy will be initiated on the SUSE Manager Server and “hop through” the Proxy to the clients.
Outbound
Used to reach SUSE Manager.
Inbound / TCP
For push actions and connections issued by osad running on the client systems.
Inbound/Outbound / TCP
For push actions with the server.
The AutoYaST profile in this section installs a SUSE Linux Enterprise Server system with all default installation options including a default network configuration using DHCP. After the installation is finished, a bootstrap script located on the SUSE Manager server is executed in order to register the freshly installed system with SUSE Manager. You need to adjust the IP address of the SUSE Manager server, the name of the bootstrap script, and the root password according to your environment:
<user>
  ...
  <username>root</username>
  <user_password>linux</user_password>
</user>
<location>http://192.168.1.1/pub/bootstrap/my_bootstrap.sh</location>
The complete AutoYaST file:
<?xml version="1.0"?>
<!DOCTYPE profile>
<profile xmlns="http://www.suse.com/1.0/yast2ns"
         xmlns:config="http://www.suse.com/1.0/configns">
  <general>
    <mode>
      <confirm config:type="boolean">false</confirm>
    </mode>
  </general>
  <networking>
    <keep_install_network config:type="boolean">true</keep_install_network>
  </networking>
  <software>
    <install_recommended config:type="boolean">true</install_recommended>
    <patterns config:type="list">
      <pattern>base</pattern>
    </patterns>
  </software>
  <users config:type="list">
    <user>
      <encrypted config:type="boolean">false</encrypted>
      <fullname>root</fullname>
      <gid>0</gid>
      <home>/root</home>
      <password_settings>
        <expire></expire>
        <flag></flag>
        <inact></inact>
        <max></max>
        <min></min>
        <warn></warn>
      </password_settings>
      <shell>/bin/bash</shell>
      <uid>0</uid>
      <username>root</username>
      <user_password>linux</user_password>
    </user>
  </users>
  <scripts>
    <init-scripts config:type="list">
      <script>
        <interpreter>shell</interpreter>
        <location>http://192.168.1.1/pub/bootstrap/my_bootstrap.sh</location>
      </script>
    </init-scripts>
  </scripts>
</profile>
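Before copying the profile to the server, it can help to confirm that the file is well-formed XML. A minimal check with xmllint (the file name autoyast_default.xml is only an example):

  # Fails with a parse error message if the profile is not well-formed XML
  xmllint --noout autoyast_default.xml && echo "profile is well-formed"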
Use this enhancement fragment to add child channels:
<add-on>
  <add_on_products config:type="list">
    <listentry>
      <ask_on_error config:type="boolean">true</ask_on_error>
      <media_url>http://$c_server/ks/dist/child/channel-label/distribution-label</media_url>
      <name>$c_name</name>
      <product>$c_product</product>
      <product_dir>/</product_dir>
    </listentry>
    ...
  </add_on_products>
</add-on>
Replace channel-label and distribution-label with the correct labels (such as sles11-sp1-updates-x86_64 and sles11-sp2-x86_64). Ensure that the distribution label corresponds to the Autoinstallable Distribution. Set the variables (such as $c_server) according to your environment.
For information about variables, see
Book “Reference Manual”, Chapter 7 “Systems”, Section 7.18 “Autoinstallation > Distributions”, Section 7.18.1 “Autoinstallation > Distributions > Variables”.
It is required that you add the updates tools channel to the <add-on> AutoYaST snippet section. This ensures that your systems are provided with an up-to-date version of the libzypp package. If you do not include the updates tools channel, you will encounter 400 errors.
In this example, (DISTRIBUTION_NAME) is replaced with the name of the autoinstallation distribution, as created previously.
<listentry>
  <ask_on_error config:type="boolean">true</ask_on_error>
  <media_url>http://$redhat_management_server/ks/dist/child/sles12-sp2-updates-x86_64/(DISTRIBUTION_NAME)</media_url>
  <name>sles12 sp2 updates</name>
  <product>SLES12</product>
  <product_dir>/</product_dir>
</listentry>
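To confirm that such a media URL is actually served before starting an installation, you can request its headers from a client. The hostname and distribution name below are hypothetical placeholders; substitute your own values:

  # Expect an HTTP 200 response if the child channel tree is available
  curl -I http://manager.example.com/ks/dist/child/sles12-sp2-updates-x86_64/my-distribution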
This appendix contains the GNU Free Documentation License version 1.2.
Copyright © 2000, 2001, 2002 Free Software Foundation, Inc. 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.
This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.
We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.
This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.
A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.
A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document’s overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.
The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.
The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.
A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque".
Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.
The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work’s title, preceding the beginning of the body of the text.
A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.
You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly display copies.
If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document’s license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.
You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:
Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.
List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.
State on the Title page the name of the publisher of the Modified Version, as the publisher.
Preserve all the copyright notices of the Document.
Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.
Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document’s license notice.
Include an unaltered copy of this License.
Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.
Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.
For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.
Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.
Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.
Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.
Preserve any Warranty Disclaimers.
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version’s license notice. These titles must be distinct from any other section titles.
You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties—for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.
You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements".
You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.
A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation’s users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document’s Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.
Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.
If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.
You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.
The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.
Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation.
Copyright (c) YEAR YOUR NAME. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled “GNU Free Documentation License”.
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the “with…Texts.” line with this:
with the Invariant Sections being LIST THEIR TITLES, with the Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.
If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.