38 Post Installation Tasks #
When you have completed your cloud deployment, these are some of the common post-installation tasks you may need to perform to verify your cloud installation.
Manually back up /etc/group on the Cloud Lifecycle Manager; it may be useful for an emergency recovery.
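For example, a minimal backup could look like the following (the destination path is only an example; any location off the node is safer still):
ardana > sudo cp -p /etc/group /var/backups/group.$(date +%Y%m%d)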
38.1 API Verification #
SUSE OpenStack Cloud 9 provides a tool (Tempest) that you can use to verify that your cloud deployment completed successfully:
38.1.1 Prerequisites #
The verification tests rely on you having an external network setup and a cloud image in your image (glance) repository. Run the following playbook to configure your cloud:
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts ardana-cloud-configure.yml
In SUSE OpenStack Cloud 9, the EXT_NET_CIDR setting for the external network is now specified in the input model - see Section 6.16.2.2, “neutron-external-networks”.
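If you want to confirm the prerequisites are in place before running Tempest, a quick check (assuming the admin credentials file ~/service.osrc used later in this chapter) is:
ardana > source ~/service.osrc
ardana > openstack network list --external
ardana > openstack image list
Both commands should return at least one entry.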
38.1.2 Tempest Integration Tests #
Tempest is a set of integration tests for OpenStack API validation, scenarios, and other specific tests to be run against a live OpenStack cluster. In SUSE OpenStack Cloud 9, Tempest has been modeled as a service and this gives you the ability to locate Tempest anywhere in the cloud. It is recommended that you install Tempest on your Cloud Lifecycle Manager node - that is where it resides by default in a new installation.
A version of the upstream Tempest integration tests is pre-deployed on the Cloud Lifecycle Manager node. For details on what Tempest is testing, you can check the contents of this file on your Cloud Lifecycle Manager:
/opt/stack/tempest/run_filters/ci.txt
You can use these embedded tests to verify whether the deployed cloud is functional.
For more information on running Tempest tests, see Tempest - The OpenStack Integration Test Suite.
Running these tests requires access to the deployed cloud's identity admin credentials.
Tempest creates and deletes test accounts and test resources for test purposes.
In certain cases, Tempest might fail to clean up some test resources after a test is complete, for example when a test fails.
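If you suspect leftover test resources, one way to spot them is a check like the following (illustrative only; Tempest typically includes "tempest" in the names of the resources it creates):
ardana > source ~/service.osrc
ardana > openstack server list --all-projects | grep -i tempest
ardana > openstack network list | grep -i tempest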
38.1.3 Running the Tests #
To run the default set of Tempest tests:
Log in to the Cloud Lifecycle Manager.
Ensure you can access your cloud:
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts cloud-client-setup.yml
source /etc/environment
Run the tests:
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts tempest-run.yml
Optionally, you can customize the test run as described in Section 38.1.5, “Customizing the Test Run”.
38.1.4 Viewing Test Results #
Tempest is deployed under /opt/stack/tempest. Test results are written to a log file in the following directory:
/opt/stack/tempest/logs
A detailed log file is written to:
/opt/stack/tempest/logs/testr_results_region1.log
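To scan the detailed log for failures quickly, something like the following works (a simple illustrative filter, not an official report tool):
ardana > grep -i -E 'fail|error' /opt/stack/tempest/logs/testr_results_region1.log | less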
If you encounter an error saying "local variable 'run_subunit_content' referenced before assignment", you may need to log in as the tempest user to run this command. This is due to a known issue reported at https://bugs.launchpad.net/testrepository/+bug/1348970.
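For example, assuming you have sudo access on the Cloud Lifecycle Manager, you can switch to the tempest user before retrying:
ardana > sudo su - tempest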
See Test Repository Users Manual for more details on how to manage the test result repository.
38.1.5 Customizing the Test Run #
There are several ways to customize which tests will be executed.
38.1.6 Run Tests for Specific Services and Exclude Specific Features #
Tempest allows you to test specific services and features using the tempest.conf configuration file. A working configuration file with inline documentation is deployed under /opt/stack/tempest/configs/.
To use this, follow these steps:
Log in to the Cloud Lifecycle Manager.
Edit the /opt/stack/tempest/configs/tempest_region1.conf file.
To test a specific service, edit the [service_available] section: clear the comment character (#) and set a line to true to test that service, or to false to skip it. For example:
cinder = true
neutron = false
To test specific features, edit any of the *_feature_enabled sections to enable or disable tests on specific features of a service:
[volume-feature-enabled]
[compute-feature-enabled]
[identity-feature-enabled]
[image-feature-enabled]
[network-feature-enabled]
[object-storage-feature-enabled]
For example, in the [identity-feature-enabled] section:
# Is the v2 identity API enabled (boolean value)
api_v2 = true
# Is the v3 identity API enabled (boolean value)
api_v3 = false
Then run the tests normally, as described in Section 38.1.3, “Running the Tests”.
38.1.7 Run Tests Matching a Series of Whitelists and Blacklists #
You can run tests against specific scenarios by editing or creating a run filter file.
Run filter files are deployed under /opt/stack/tempest/run_filters.
Use run filters to whitelist or blacklist specific tests or groups of tests:
Lines starting with # or empty lines are ignored.
Lines starting with + are whitelisted.
Lines starting with - are blacklisted.
Lines not matching any of the above conditions are blacklisted.
If the whitelist is empty, all available tests are passed to the blacklist. If the blacklist is empty, all tests from the whitelist are returned.
The whitelist is applied first; the blacklist is then applied against the set of tests returned by the whitelist.
To run whitelist and blacklist tests:
Log in to the Cloud Lifecycle Manager.
Make sure you can access the cloud:
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts cloud-client-setup.yml
source /etc/environment
Run the tests:
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts tempest-run.yml -e run_filter=<run_filter_name>
Note that run_filter_name is the name of the run filter file without its extension. For instance, to run using the filter from the file /opt/stack/tempest/run_filters/ci.txt, use the following:
ansible-playbook -i hosts/verb_hosts tempest-run.yml -e run_filter=ci
Documentation on the format of whitelists and blacklists is available in:
/opt/stack/tempest/bin/tests2skip.py
Example:
The following entries run the API tests and exclude tests that are less relevant for deployment validation, such as negative, admin, CLI, and third-party (EC2) tests:
+tempest\.api\.*
*[Aa]dmin.*
*[Nn]egative.*
- tempest\.cli.*
- tempest\.thirdparty\.*
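To try this filter yourself, you could save it as a new run filter file and pass its name to the playbook (the file name api_only is just an example; you may need write permission under /opt/stack/tempest):
ardana > cat > /opt/stack/tempest/run_filters/api_only.txt <<'EOF'
+tempest\.api\.*
*[Aa]dmin.*
*[Nn]egative.*
- tempest\.cli.*
- tempest\.thirdparty\.*
EOF
ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts tempest-run.yml -e run_filter=api_only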
38.2 Verify the Object Storage (swift) Operations #
For information about verifying the operations, see Section 9.1, “Running the swift Dispersion Report”.
38.3 Uploading an Image for Use #
To create a Compute instance, you need to obtain an image that you can use. The Cloud Lifecycle Manager provides an Ansible playbook that will download a CirrOS Linux image, and then upload it as a public image to your image repository for use across your projects.
38.3.1 Running the Playbook #
Use the following command to run this playbook:
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts glance-cloud-configure.yml -e proxy=<PROXY>
The table below shows the optional switch that you can use as part of this playbook to specify environment-specific information:

Switch | Description
---|---
-e proxy=<PROXY> | Optional. If your environment requires a proxy for the internet, use this switch to specify the proxy information.
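To confirm the upload succeeded, you can list the images afterwards (the exact image name assigned by the playbook may vary):
ardana > source ~/service.osrc
ardana > openstack image list | grep -i cirros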
38.3.2 How to Curate Your Own Images #
OpenStack has created a guide to show you how to obtain, create, and modify images that will be compatible with your cloud: see the OpenStack Virtual Machine Image Guide.
38.3.3 Using the python-glanceclient CLI to Create Images #
You can use the glance client on a machine accessible to your cloud, or on your Cloud Lifecycle Manager, where it is automatically installed.
The OpenStackClient allows you to create, update, list, and delete images as well as manage your image member lists, which allows you to share access to images across multiple tenants. As with most OpenStack CLI tools, you can use the openstack help command to get a full list of commands as well as their syntax.
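For example, a typical image upload with the OpenStackClient looks like this (the file name and image name here are placeholders; creating public images usually requires admin credentials):
ardana > openstack image create --disk-format qcow2 --container-format bare \
  --public --file ./my-image.qcow2 my-image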
If you would like to use the --copy-from option when creating an image, you will need to have your Administrator enable the http store in your environment using the instructions outlined in Section 6.7.2, “Allowing the glance copy-from option in your environment”.
38.4 Creating an External Network #
You must have an external network set up to allow your Compute instances to reach the internet. There are multiple methods you can use to create this external network and we provide two of them here. The SUSE OpenStack Cloud installer provides an Ansible playbook that will create this network for use across your projects. We also show you how to create this network via the command line tool from your Cloud Lifecycle Manager.
38.4.1 Using the Ansible Playbook #
This playbook will query the Networking service for an existing external network, and then create a new one if you do not already have one. The resulting external network will have the name ext-net, with a subnet matching the CIDR you specify in the command below.
If you need more granularity, for example to specify an allocation pool for the subnet, use the steps in Section 38.4.2, “Using the OpenStackClient CLI”.
ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts neutron-cloud-configure.yml -e EXT_NET_CIDR=<CIDR>
The table below shows the optional switch that you can use as part of this playbook to specify environment-specific information:

Switch | Description
---|---
-e EXT_NET_CIDR=<CIDR> | Optional. You can use this switch to specify the external network CIDR. If you do not use this switch, or use a wrong value, the VMs will not be accessible over the network. This CIDR will be from the EXTERNAL VM network. Note: if this option is not defined, the default value is "172.31.0.0/16".
38.4.2 Using the OpenStackClient CLI #
For more granularity you can utilize the OpenStackClient to create your external network.
Log in to the Cloud Lifecycle Manager.
Source the Admin credentials:
source ~/service.osrc
Create the external network and then the subnet using the commands below.
Creating the network:
ardana > openstack network create --external <external-network-name>
Creating the subnet:
ardana > openstack subnet create --network <external-network-name> --subnet-range <CIDR> \
  --gateway <gateway> --allocation-pool start=<IP_start>,end=<IP_end> [--no-dhcp] <subnet-name>
Where:
Value | Description
---|---
external-network-name | The name given to your external network. This is a unique value that you will choose. The value ext-net is usually used.
subnet-name | The name given to your subnet, for example ext-net-subnet.
CIDR | The external network CIDR. If you use a wrong value, the VMs will not be accessible over the network. This CIDR will be from the EXTERNAL VM network.
--gateway | Optional switch to specify the gateway IP for your subnet. If this is not included, the first available IP will be chosen.
--allocation-pool start=<IP_start>,end=<IP_end> | Optional switch to specify the start and end IP addresses to use as the allocation pool for this subnet.
--no-dhcp | Optional switch if you want to disable DHCP on this subnet. If this is not specified, DHCP will be enabled.
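Putting it together, here is a complete example using the default CIDR mentioned above (the gateway and allocation pool values are illustrative only and must match your environment):
ardana > openstack network create --external ext-net
ardana > openstack subnet create --network ext-net --subnet-range 172.31.0.0/16 \
  --gateway 172.31.0.1 --allocation-pool start=172.31.0.10,end=172.31.0.250 \
  --no-dhcp ext-net-subnet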
38.4.3 Next Steps #
Once the external network is created, users can create a Private Network to complete their networking setup.
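As a sketch of what that typically looks like with the OpenStackClient (all names and the CIDR below are example values):
ardana > openstack network create my-net
ardana > openstack subnet create --network my-net --subnet-range 10.0.0.0/24 my-subnet
ardana > openstack router create my-router
ardana > openstack router set --external-gateway ext-net my-router
ardana > openstack router add subnet my-router my-subnet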