Applies to SUSE OpenStack Cloud 8

33 Support

Find solutions for the most common pitfalls and technical details on how to create a support request for SUSE OpenStack Cloud here.

33.1 FAQ

1. Node Deployment

Q: How do I disable the YaST installer self-update when deploying nodes?

Prior to starting an installation, the YaST installer can update itself if updates are available. This feature is enabled by default. In case of problems with it, disable it as follows:

  1. Open ~/openstack/ardana/ansible/roles/cobbler/templates/sles.grub.j2 with an editor and add self_update=0 to the line starting with linuxefi. The result needs to look like the following:

    linuxefi images/{{ sles_profile_name }}-x86_64/linux ifcfg={{ item[0] }}=dhcp install=http://{{ cobbler_server_ip_addr }}:79/cblr/ks_mirror/{{ sles_profile_name }} self_update=0 AutoYaST2=http://{{ cobbler_server_ip_addr }}:79/cblr/svc/op/ks/system/{{ item[1] }}
  2. Commit your changes:

    ardana > git commit -m "Disable Yast Self Update feature" \
    ~/openstack/ardana/ansible/roles/cobbler/templates/sles.grub.j2
  3. If you need to reenable the installer self-update, remove self_update=0 and commit the changes.
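Step 1 above can also be scripted. The following sketch shows the edit on a throwaway copy of the template; the sed expression and the sample line content are illustrative assumptions, not part of the product:

```shell
# Illustrative sketch, run against a throwaway copy: append self_update=0 to
# any linuxefi line that does not already carry it. In practice the target
# file is ~/openstack/ardana/ansible/roles/cobbler/templates/sles.grub.j2.
template=$(mktemp)
printf 'linuxefi images/profile-x86_64/linux install=http://cobbler/profile\n' > "$template"

# For lines matching linuxefi: if self_update=0 is absent, append it.
sed -i '/linuxefi/{/self_update=0/!s/$/ self_update=0/}' "$template"

grep 'self_update=0' "$template"
```

Because the substitution only fires when the option is absent, running the command a second time leaves the file unchanged.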

33.2 Support

Before contacting support to help you with a problem on your cloud, it is strongly recommended that you gather as much information about your system and the problem as possible. For this purpose, SUSE OpenStack Cloud ships with a tool called supportconfig. It gathers system information such as the current kernel version being used, the hardware, RPM database, partitions, and other items. supportconfig also collects the most important log files, making it easier for the supporters to identify and solve your problem.

It is recommended to always run supportconfig on the CLM Server and on the Control Node(s). If a Compute Node or a Storage Node is part of the problem, run supportconfig on the affected node as well. For details on how to run supportconfig, see https://documentation.suse.com/sles/12-SP5/single-html/SLES-admin/#cha-adm-support.
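Collecting the reports from several machines can be scripted. The sketch below only prints the commands (a dry run); the controller host names are placeholder assumptions, so substitute the entries from your own inventory and drop the echo to actually run supportconfig:

```shell
# Dry-run sketch: print the supportconfig invocations for the CLM Server
# (run locally) and the Control Nodes (run via ssh). Host names below are
# placeholders for illustration only.
nodes="localhost controller-0001 controller-0002 controller-0003"
for node in $nodes; do
  if [ "$node" = localhost ]; then
    echo "sudo supportconfig"             # the CLM Server itself
  else
    echo "ssh $node sudo supportconfig"   # a remote Control Node
  fi
done
```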

33.2.1 Applying PTFs (Program Temporary Fixes) Provided by the SUSE L3 Support

Under certain circumstances, SUSE support may provide temporary fixes, so-called PTFs, to customers with an L3 support contract. These PTFs are provided as RPM packages. To make them available on all nodes in SUSE OpenStack Cloud, proceed as follows. If you prefer to test them first on a single node, see Section 33.2.2, “Testing PTFs (Program Temporary Fixes) on a Single Node”.

  1. Download the packages from the location provided by the SUSE L3 Support to a temporary location on the CLM Server.

  2. Move the packages from the temporary download location to the following directories on the CLM Server:

    noarch packages (*.noarch.rpm):


    x86_64 packages (*.x86_64.rpm):


  3. Create or update the repository metadata:

    ardana > createrepo-cloud-ptf
  4. To deploy the updates, proceed as described in Section 13.3, “Cloud Lifecycle Manager Maintenance Update Procedure” and refresh the PTF repository before installing package updates on a node:

    ardana > zypper refresh -fr PTF
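The sorting in step 2 can be sketched as follows. Both directories here are throwaway placeholders created for the demonstration; the real target directories are the PTF repository paths on the CLM Server listed above:

```shell
# Sketch of step 2: sort downloaded PTF packages into per-architecture
# directories by file name suffix. The download and repository directories
# below are temporary placeholders, not the real repository locations.
download=$(mktemp -d)
repo=$(mktemp -d)
touch "$download/fix-1.0.noarch.rpm" "$download/venv-fix-1.0.x86_64.rpm"
mkdir -p "$repo/noarch" "$repo/x86_64"

for pkg in "$download"/*.rpm; do
  case "$pkg" in
    *.noarch.rpm) mv "$pkg" "$repo/noarch/" ;;
    *.x86_64.rpm) mv "$pkg" "$repo/x86_64/" ;;
  esac
done

ls "$repo/noarch" "$repo/x86_64"
```

After moving the packages into the real repository directories, regenerate the metadata with createrepo-cloud-ptf as shown in step 3.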

33.2.2 Testing PTFs (Program Temporary Fixes) on a Single Node

If you want to test a PTF (Program Temporary Fix) before deploying it on all nodes, or if you want to verify that it fixes a certain issue, you can manually install the PTF on a single node.

In the following procedure, a PTF named venv-openstack-nova-x86_64-ptf.rpm, containing a fix for Nova, is installed on Compute Node 01.

Procedure 33.1: Testing a Fix for Nova
  1. Check the version number of the package(s) that will be upgraded with the PTF. Run the following command on the deployer node:

    ardana > rpm -q venv-openstack-nova-x86_64
  2. Install the PTF on the deployer node:

    tux > sudo zypper up ./venv-openstack-nova-x86_64-ptf.rpm

    This will install a new TAR archive in /opt/ardana_packager/ardana-8/sles_venv/x86_64/.

  3. Register the TAR archive with the indexer:

    tux > sudo create_index --dir \
    /opt/ardana_packager/ardana-8/sles_venv/x86_64

    This will update the index file /opt/ardana_packager/ardana-8/sles_venv/x86_64/packages.

  4. Deploy the fix on Compute Node 01:

    1. Check whether the fix can be deployed on a single Compute Node without updating the Control Nodes:

      ardana > cd ~/scratch/ansible/next/ardana/ansible
      ardana > ansible-playbook -i hosts/verb_hosts nova-upgrade.yml \
      --limit=inputmodel-ccp-compute0001-mgmt --list-hosts
    2. If the previous test passes, install the fix:

      ardana > ansible-playbook -i hosts/verb_hosts nova-upgrade.yml \
      --limit=inputmodel-ccp-compute0001-mgmt
  5. Validate the fix, for example by logging in to the Compute Node to check the log files:

    ardana > ssh ardana@inputmodel-ccp-compute0001-mgmt
  6. If your tests are positive, install the PTF on all nodes as described in Section 33.2.1, “Applying PTFs (Program Temporary Fixes) Provided by the SUSE L3 Support”.

    If the tests are negative, uninstall the fix and restore the previous state of the Compute Node by running the following commands on the deployer node:

    tux > sudo zypper install --force venv-openstack-nova-x86_64-OLD-VERSION
    ardana > cd ~/scratch/ansible/next/ardana/ansible
    ardana > ansible-playbook -i hosts/verb_hosts nova-upgrade.yml \
    --limit=inputmodel-ccp-compute0001-mgmt

    Make sure to replace OLD-VERSION with the version number you checked in the first step.
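The version bookkeeping in steps 1 and 6 can be sketched as a small script. The rpm command is stubbed out with a shell function and a made-up version string so the sketch runs anywhere; on the deployer node, drop the stub and use the real rpm -q output:

```shell
# Sketch: record the installed version before applying the PTF so the
# rollback target for step 6 is known in advance.
rpm() { echo "venv-openstack-nova-x86_64-8.0+git.1234-1.1"; }  # stub, illustrative version only

old_version=$(rpm -q venv-openstack-nova-x86_64)
echo "rollback target:  $old_version"
echo "rollback command: sudo zypper install --force $old_version"
```

Storing the rollback target up front avoids having to reconstruct OLD-VERSION by hand after a failed test.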
