Deployment Guide using Cloud Lifecycle Manager
Applies to SUSE OpenStack Cloud 9

36 Troubleshooting the Installation

We have gathered some of the common issues that occur during installation and organized them by the stage at which they occur. These sections correspond to the steps labeled in the installation instructions.

36.1 Issues during Cloud Lifecycle Manager Setup

Issue: Running the ardana-init.bash script when configuring your Cloud Lifecycle Manager does not complete

The ardana-init.bash script installs Git as part of its setup. If the DNS servers specified in the /etc/resolv.conf file on your Cloud Lifecycle Manager are missing, invalid, or not functioning properly, the script will not be able to complete.

To resolve this issue, double-check the nameserver entries in your /etc/resolv.conf file and then re-run the script.
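
A quick way to verify name resolution on the Cloud Lifecycle Manager before re-running the script; the hostname queried below is only an example, so substitute any host your deployment needs to reach:

# Show the nameservers currently configured
cat /etc/resolv.conf

# Test name resolution against an example hostname
getent hosts github.com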

36.2 Issues while Updating Configuration Files

Configuration Processor Fails Due to Wrong yml Format

If you receive the error below when running the configuration processor, you may have a formatting error:

TASK: [fail msg="Configuration processor run failed, see log output above for
details"]

First, check the Ansible log at the location below for more details on which yml file in your input model has the error:

~/.ansible/ansible.log

Check the configuration file to locate and fix the error, keeping in mind the tips below.

Check your files to ensure that they do not contain the following (an example scan is shown after this list):

  • Non-ASCII characters

  • Unneeded spaces
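
One way to scan for both issues, assuming GNU grep and that your input model files live under ~/openstack/my_cloud/definition (adjust the path to match your setup):

# List lines containing non-ASCII characters
grep -rPn '[^\x00-\x7F]' ~/openstack/my_cloud/definition

# List lines with trailing whitespace
grep -rn '[[:space:]]$' ~/openstack/my_cloud/definition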

Once you have fixed the formatting error in your files, commit the changes with these steps:

  1. Commit your changes to Git:

    cd ~/openstack/ardana/ansible
    git add -A
    git commit -m "My config or other commit message"
  2. Re-run the configuration processor playbook and confirm the error does not recur:
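
    cd ~/openstack/ardana/ansible
    ansible-playbook -i hosts/localhost config-processor-run.yml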

Configuration processor fails with provider network OCTAVIA-MGMT-NET error

If you receive the error below when running the configuration processor, your VLAN settings for Octavia have not been configured correctly.

################################################################################
# The configuration processor failed.
#   config-data-2.0           ERR: Provider network OCTAVIA-MGMT-NET host_routes:
# destination '192.168.10.0/24' is not defined as a Network in the input model.
# Add 'external: True' to this host_route if this is for an external network.
################################################################################

To resolve the issue, ensure that your settings in ~/openstack/my_cloud/definition/data/neutron/neutron_config.yml are correct for the VLAN setup for Octavia.
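
For reference, a corrected host_routes entry might look like the hypothetical excerpt below; the destination and nexthop addresses are illustrative only and must match your own environment:

host_routes:
  # Mark the route as external so the configuration processor does not
  # require the destination to be a Network defined in the input model
  - destination: 192.168.10.0/24
    nexthop: 10.1.1.1
    external: True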

Changes Made to your Configuration Files

If you have made corrections to your configuration files and need to re-run the Configuration Processor, you only need to commit your changes to your local Git repository:

cd ~/openstack/ardana/ansible
git add -A
git commit -m "commit message"

You can then re-run the configuration processor:

cd ~/openstack/ardana/ansible
ansible-playbook -i hosts/localhost config-processor-run.yml

Configuration Processor Fails Because Encryption Key Does Not Meet Requirements

If you choose to set an encryption password when running the configuration processor, you may receive the following error if the chosen password does not meet the complexity requirements:

################################################################################
# The configuration processor failed.
#   encryption-key ERR: The Encryption Key does not meet the following requirement(s):
#       The Encryption Key must be at least 12 characters
#       The Encryption Key must contain at least 3 of following classes of characters:
#                           Uppercase Letters, Lowercase Letters, Digits, Punctuation
################################################################################

If you receive the above error, run the configuration processor again and select a password that meets the complexity requirements detailed in the error message:

cd ~/openstack/ardana/ansible
ansible-playbook -i hosts/localhost config-processor-run.yml

36.3 Issues while Deploying the Cloud

Issue: If the site.yml playbook fails, you can query the log for the reason

Ansible reports errors in the command-line output; however, if you want to view the full log for any reason, it is located at:

~/.ansible/ansible.log

This log is updated in real time as you run Ansible playbooks.

Tip

Use grep to search the log. Usage: grep <text> ~/.ansible/ansible.log
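
For example, to list failed or fatal task lines (the pattern is just one common choice):

grep -Ei 'fatal|failed' ~/.ansible/ansible.log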

Issue: How to Wipe the Disks of your Machines

If you have re-run the site.yml playbook, you may need to wipe the disks of your nodes.

You should run the wipe_disks.yml playbook only after re-running the bm-reimage.yml playbook but before you re-run the site.yml playbook.

cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts wipe_disks.yml

The playbook output shows the disks to be wiped and asks you to confirm or abort the action. You can optionally use the --limit <NODE_NAME> switch to restrict the playbook to specific nodes, as shown in the example below. This action does not affect the OS partitions on the servers.
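
For example, to restrict the wipe to a single node, substituting a node name from your inventory for the placeholder:

cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts wipe_disks.yml --limit <NODE_NAME>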

If you receive an error stating that osconfig has already run on your nodes, remove the /etc/ardana/osconfig-ran file on each of the nodes you want to wipe with this command:

sudo rm /etc/ardana/osconfig-ran

This clears the flag and allows the disks to be wiped.
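
If you need to clear the flag on several nodes at once, an Ansible ad-hoc command is one option. This is a sketch that assumes your nodes are reachable through the hosts/verb_hosts inventory; substitute your own host pattern for the placeholder:

cd ~/scratch/ansible/next/ardana/ansible
# Remove the flag file on each targeted node (requires privilege escalation)
ansible -i hosts/verb_hosts '<NODE_PATTERN>' -b -m file -a 'path=/etc/ardana/osconfig-ran state=absent'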

Error Received if Root Logical Volume is Too Small

When running the site.yml playbook, you may receive a message that includes the error below if your root logical volume is too small. Locate this error in the output and resolve it as follows.

2015-09-29 15:54:03,022 p=26345 u=stack | stderr: New size given (7128 extents)
not larger than existing size (7629 extents)

The error message may also reference the root volume:

"name": "root", "size": "10%"

The problem here is that the root logical volume, as specified in the disks_controller.yml file, is set to 10% of the overall physical volume and this value is too small.

To resolve this issue, ensure that the percentage is set appropriately for the size of your logical volume. The default values in the configuration files are based on a 500 GB disk, so if your disks are smaller you may need to increase the percentage so that there is enough room.
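
As a hypothetical illustration, the relevant portion of disks_controller.yml might look like the excerpt below; the percentage shown is illustrative only and must be sized for your own disks, and the surrounding keys must match your existing disk model:

logical-volumes:
  # Increase this percentage if the root volume is too small
  - name: root
    size: 35%
    fstype: ext4
    mount: /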

Multiple Keystone Failures Received during site.yml

If you receive the keystone error below during your site.yml run, follow these steps:

TASK: [OPS-MON | _keystone_conf | Create Ops Console service in keystone] *****
failed:
[...]
msg: An unexpected error prevented the server from fulfilling your request.
(HTTP 500) (Request-ID: req-23a09c72-5991-4685-b09f-df242028d742), failed

FATAL: all hosts have already failed -- aborting

The most likely cause of this error is a problem with the virtual IP address that prevents keystone API communication through it. Check the keystone log on the controller, where you will likely see authorization failure errors.

Verify that your virtual IP address is active and listening on the proper port on all of your controllers using this command:

netstat -tplan | grep 35357

Ensure that your Cloud Lifecycle Manager did not pick the wrong (unusable) IP address from the list of IP addresses assigned to your Management network.

The Cloud Lifecycle Manager will take the first available IP address after the gateway-ip defined in your ~/openstack/my_cloud/definition/data/networks.yml file and use it as the virtual IP address for that particular network. If this IP address is already used or reserved for another purpose outside of your SUSE OpenStack Cloud deployment, you will receive the error above.

To resolve this issue, we recommend using the start-address and, if needed, the end-address options in your networks.yml file to further define which IP addresses your cloud deployment may use. For more information, see Section 6.14, “Networks”.
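
As a hypothetical illustration, a networks.yml entry using these options might look like the excerpt below; all names and addresses are placeholders and must match your own network design:

- name: MANAGEMENT-NET
  vlanid: 4
  cidr: 10.1.1.0/24
  gateway-ip: 10.1.1.1
  # Restrict the address range the Cloud Lifecycle Manager may allocate from
  start-address: 10.1.1.10
  end-address: 10.1.1.100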

After you have made changes to your networks.yml file, follow these steps to commit the changes:

  1. Ensuring that you stay within the ~/openstack directory, commit the changes you just made:

    cd ~/openstack
    git commit -a -m "commit message"
  2. Run the configuration processor:

    cd ~/openstack/ardana/ansible
    ansible-playbook -i hosts/localhost config-processor-run.yml
  3. Update your deployment directory:

    cd ~/openstack/ardana/ansible
    ansible-playbook -i hosts/localhost ready-deployment.yml
  4. Re-run the site.yml playbook:

    cd ~/scratch/ansible/next/ardana/ansible
    ansible-playbook -i hosts/verb_hosts site.yml