Deployment Guide using Cloud Lifecycle Manager
Applies to SUSE OpenStack Cloud 9

38 Post Installation Tasks

When you have completed your cloud deployment, there are several common post-installation tasks you may need to perform to verify your cloud installation.


Manually back up /etc/group on the Cloud Lifecycle Manager; it may be useful for an emergency recovery.

38.1 API Verification

SUSE OpenStack Cloud 9 provides a tool (Tempest) that you can use to verify that your cloud deployment completed successfully:

38.1.1 Prerequisites

The verification tests rely on you having an external network set up and a cloud image in your image (glance) repository. Run the following playbook to configure your cloud:

cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts ardana-cloud-configure.yml

In SUSE OpenStack Cloud 9, the EXT_NET_CIDR setting for the external network is specified in the input model; see the “neutron-external-networks” section.

38.1.2 Tempest Integration Tests

Tempest is a set of integration tests for OpenStack API validation, scenarios, and other specific tests to be run against a live OpenStack cluster. In SUSE OpenStack Cloud 9, Tempest has been modeled as a service, which gives you the ability to locate Tempest anywhere in the cloud. It is recommended that you install Tempest on your Cloud Lifecycle Manager node; that is where it resides by default in a new installation.

A version of the upstream Tempest integration tests is pre-deployed on the Cloud Lifecycle Manager node. For details on what Tempest is testing, you can check the contents of this file on your Cloud Lifecycle Manager:


You can use these embedded tests to verify if the deployed cloud is functional.

For more information on running Tempest tests, see Tempest - The OpenStack Integration Test Suite.


Running these tests requires access to the deployed cloud's identity admin credentials.

Tempest creates and deletes test accounts and test resources for test purposes.

In certain cases, Tempest might fail to clean up some of the test resources after a test is complete, for example when tests fail.

38.1.3 Running the Tests

To run the default set of Tempest tests:

  1. Log in to the Cloud Lifecycle Manager.

  2. Ensure you can access your cloud:

    cd ~/scratch/ansible/next/ardana/ansible
    ansible-playbook -i hosts/verb_hosts cloud-client-setup.yml
    source /etc/environment
  3. Run the tests:

    cd ~/scratch/ansible/next/ardana/ansible
    ansible-playbook -i hosts/verb_hosts tempest-run.yml

Optionally, you can customize the test run; see Section 38.1.5, “Customizing the Test Run”.

38.1.4 Viewing Test Results

Tempest is deployed under /opt/stack/tempest. Test results are written in a log file in the following directory:


A detailed log file is written to:


If you encounter an error saying "local variable 'run_subunit_content' referenced before assignment", you may need to log in as the tempest user to run this command. This is due to a known issue reported at https://bugs.launchpad.net/testrepository/+bug/1348970.

See Test Repository Users Manual for more details on how to manage the test result repository.
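As a sketch of how the result repository might be inspected by hand, assuming the testr command-line tool from the testrepository project is available (the tool the manual above documents), you could run something like:

```
ardana > cd /opt/stack/tempest
ardana > sudo -u tempest testr last      # replay the results of the most recent run
ardana > sudo -u tempest testr failing   # list only the tests that failed
```

Running these commands as the tempest user avoids the permission issue mentioned above.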

38.1.6 Run Tests for Specific Services and Exclude Specific Features

Tempest allows you to test specific services and features using the tempest.conf configuration file.

A working configuration file with inline documentation is deployed under /opt/stack/tempest/configs/.

To use this, follow these steps:

  1. Log in to the Cloud Lifecycle Manager.

  2. Edit the /opt/stack/tempest/configs/tempest_region1.conf file.

  3. To test a specific service, edit the [service_available] section: remove the comment character # and set the service's line to true to test that service or false to skip it.

    cinder = true
    neutron = false
  4. To test specific features, edit any of the *_feature_enabled sections to enable or disable tests on specific features of a service.

    #Is the v2 identity API enabled (boolean value)
    api_v2 = true
    #Is the v3 identity API enabled (boolean value)
    api_v3 = false
  5. Then run the tests normally.

38.1.7 Run Tests Matching a Series of Whitelists and Blacklists

You can run tests against specific scenarios by editing or creating a run filter file.

Run filter files are deployed under /opt/stack/tempest/run_filters.

Use run filters to whitelist or blacklist specific tests or groups of tests:

  • lines starting with # or empty are ignored

  • lines starting with + are whitelisted

  • lines starting with - are blacklisted

  • lines not matching any of the above conditions are blacklisted

If the whitelist is empty, all available tests are fed to the blacklist. If the blacklist is empty, all tests from the whitelist are returned.

The whitelist is applied first; the blacklist is then applied to the set of tests returned by the whitelist.
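The two-stage filtering described above can be sketched as a small, self-contained shell illustration (the test names and regular expressions here are invented for the example and are not part of any shipped run filter):

```shell
# Invented example data: the full set of discovered tests.
all_tests='tempest.api.compute.test_servers
tempest.api.identity.test_tokens
tempest.cli.test_client
tempest.thirdparty.ec2.test_instances'

# Whitelist stage (+ lines): keep only matching tests. An empty whitelist
# would pass every available test through to the blacklist stage.
whitelisted=$(printf '%s\n' "$all_tests" | grep -E '^tempest\.api\.')

# Blacklist stage (- lines): drop matching tests from the whitelisted set.
selected=$(printf '%s\n' "$whitelisted" | grep -Ev '^tempest\.api\.identity\.')

printf '%s\n' "$selected"   # -> tempest.api.compute.test_servers
```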

To run whitelist and blacklist tests:

  1. Log in to the Cloud Lifecycle Manager.

  2. Make sure you can access the cloud:

    cd ~/scratch/ansible/next/ardana/ansible
    ansible-playbook -i hosts/verb_hosts cloud-client-setup.yml
    source /etc/environment
  3. Run the tests:

    cd ~/scratch/ansible/next/ardana/ansible
    ansible-playbook -i hosts/verb_hosts tempest-run.yml -e run_filter=<run_filter_name>

Note that run_filter_name is the name of the run filter file without its extension. For instance, to use the filter from the file /opt/stack/tempest/run_filters/ci.txt, run the following:

ansible-playbook -i hosts/verb_hosts tempest-run.yml -e run_filter=ci

Documentation on the format of whitelists and blacklists is available at:



The following entries run API tests and exclude tests that are less relevant for deployment validation, such as negative, admin, CLI, and third-party (EC2) tests:

- tempest\.cli.*
- tempest\.thirdparty\.*
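As an illustration of the syntax described above, a hypothetical run filter combining both list types might look like this (the expressions are examples only, not one of the shipped filters):

```
# Whitelist: keep only API tests
+ tempest\.api\..*
# Blacklist: drop negative scenarios and CLI tests from the whitelisted set
- .*negative.*
- tempest\.cli.*
```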

38.2 Verify the Object Storage (swift) Operations

For information about verifying the operations, see Book “Operations Guide CLM”, Chapter 9 “Managing Object Storage”, Section 9.1 “Running the swift Dispersion Report”.

38.3 Uploading an Image for Use

To create a Compute instance, you need an image. The Cloud Lifecycle Manager provides an Ansible playbook that downloads a CirrOS Linux image and then uploads it as a public image to your image repository for use across your projects.

38.3.1 Running the Playbook

Use the following command to run this playbook:

cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts glance-cloud-configure.yml -e proxy=<PROXY>

The table below shows the optional switch that you can use with this playbook to specify environment-specific information:

-e proxy="<proxy_address:port>"

Optional. If your environment requires a proxy to reach the internet, use this switch to specify the proxy information.
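If you prefer to perform the equivalent steps by hand, a rough sketch using curl and the OpenStackClient might look like the following (the CirrOS URL and version are examples only; they are not necessarily the image the playbook downloads):

```
ardana > curl -LO https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
ardana > openstack image create cirros \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 \
  --container-format bare \
  --public
```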

38.3.2 How to Curate Your Own Images

OpenStack has created a guide to show you how to obtain, create, and modify images that will be compatible with your cloud:

OpenStack Virtual Machine Image Guide

38.3.3 Using the python-glanceclient CLI to Create Images

You can use python-glanceclient on a machine that has access to your cloud, or on your Cloud Lifecycle Manager, where it is installed automatically.

The OpenStackClient allows you to create, update, list, and delete images, as well as manage your image member lists, which let you share access to images across multiple projects. As with most OpenStack CLI tools, you can use the openstack help command to get a full list of commands and their syntax.

If you would like to use the --copy-from option when creating an image, you will need to have your Administrator enable the http store in your environment using the instructions outlined at Book “Operations Guide CLM”, Chapter 6 “Managing Compute”, Section 6.7 “Configuring the Image Service”, Section 6.7.2 “Allowing the glance copy-from option in your environment”.

38.4 Creating an External Network

You must have an external network set up to allow your Compute instances to reach the internet. There are multiple methods you can use to create this external network, and we describe two of them here. The SUSE OpenStack Cloud installer provides an Ansible playbook that will create this network for use across your projects, and we also show you how to create this network via the command-line tool from your Cloud Lifecycle Manager.

38.4.1 Using the Ansible Playbook

This playbook will query the Networking service for an existing external network, and then create a new one if you do not already have one. The resulting external network will have the name ext-net with a subnet matching the CIDR you specify in the command below.

If you need more granularity, for example to specify an allocation pool for the subnet, use the method in Section 38.4.2, “Using the OpenStackClient CLI”.

ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts neutron-cloud-configure.yml -e EXT_NET_CIDR=<CIDR>

The table below shows the optional switch that you can use with this playbook to specify environment-specific information:

-e EXT_NET_CIDR=<CIDR>

Optional. Use this switch to specify the external network CIDR. If you do not use this switch, or use a wrong value, the VMs will not be accessible over the network.

This CIDR will be from the EXTERNAL VM network.

If this option is not defined, the default value is "".

38.4.2 Using the OpenStackClient CLI

For more granularity, you can use the OpenStackClient to create your external network.

  1. Log in to the Cloud Lifecycle Manager.

  2. Source the Admin credentials:

    source ~/service.osrc
  3. Create the external network and then the subnet using the commands below.

    Creating the network:

    ardana > openstack network create --external <external-network-name>

    Creating the subnet:

    ardana > openstack subnet create --network <external-network-name> --subnet-range <CIDR> \
    --gateway <gateway> --allocation-pool start=<IP_start>,end=<IP_end> [--no-dhcp] <subnet-name>

    <external-network-name>

    The name given to your external network. This is a unique value that you choose. The value ext-net is commonly used.

    <CIDR>

    Use this value to specify the external network CIDR. If you use a wrong value, the VMs will not be accessible over the network.

    This CIDR will be from the EXTERNAL VM network.

    --gateway <gateway>

    Optional switch to specify the gateway IP for your subnet. If this is not included, the first available IP address in the subnet is chosen.

    --allocation-pool start=<IP_start>,end=<IP_end>

    Optional switch to specify the start and end IP addresses to use as the allocation pool for this subnet.

    --no-dhcp

    Optional switch to disable DHCP on this subnet. If this is not specified, DHCP will be enabled.
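As a concrete illustration of the steps above, a hypothetical invocation with example values (using the OpenStackClient flag spellings --external, --network, --subnet-range, and --no-dhcp) might look like:

```
ardana > openstack network create --external ext-net
ardana > openstack subnet create --network ext-net \
  --subnet-range 172.31.0.0/16 --gateway 172.31.0.1 \
  --allocation-pool start=172.31.0.10,end=172.31.0.250 ext-subnet
```

The network name ext-net, subnet name ext-subnet, and all addresses here are placeholders; substitute values from your EXTERNAL VM network.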

38.4.3 Next Steps

Once the external network is created, users can create a Private Network to complete their networking setup.
