Applies to SUSE OpenStack Cloud 8

18 Installation for SUSE OpenStack Cloud Entry-scale Cloud with Swift Only

This page describes the installation step requirements for the SUSE OpenStack Cloud Entry-scale Cloud with Swift Only model.

18.1 Important Notes

  • For information about when to use the GUI installer and when to use the command line (CLI), see Chapter 1, Overview.

  • Review the Book “Planning an Installation with Cloud Lifecycle Manager”, Chapter 2 “Hardware and Software Support Matrix”.

  • Review the release notes to make yourself aware of any known issues and limitations.

  • The installation process can occur in different phases. For example, you can install the control plane only and add Compute nodes afterwards.

  • If you run into issues during installation, refer to Chapter 23, Troubleshooting the Installation.

  • Make sure all disks on the system(s) are wiped before you begin the install. (For Swift, refer to Book “Planning an Installation with Cloud Lifecycle Manager”, Chapter 11 “Modifying Example Configurations for Object Storage using Swift”, Section 11.6 “Swift Requirements for Device Group Drives”.)

  • There is no requirement to have a dedicated network for OS-install and system deployment; this can be shared with the management network. More information can be found in Book “Planning an Installation with Cloud Lifecycle Manager”, Chapter 9 “Example Configurations”.

  • The terms deployer and Cloud Lifecycle Manager are used interchangeably. They refer to the same nodes in your cloud environment.

  • When running the Ansible playbooks in this installation guide, if a playbook fails you will see an error response suggesting you retry with the --limit switch. Avoid that switch; simply re-run the playbook without it.

  • DVR is not supported with ESX compute.

  • When you attach a Cinder volume to a VM running on an ESXi host, the volume is not detected automatically. Set the image metadata vmware_adaptertype=lsiLogicsas on the image before launching the instance so that the volume change is discovered correctly.

  • The installation process will create several OpenStack roles. Not all roles will be relevant for a cloud with Swift only, but they will not cause problems.
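For the image-metadata note above, the property can be set with the OpenStack client before launching instances. A minimal sketch, guarded for hosts where the client is not installed (the image name is a placeholder):

```shell
# Set the VMware adapter type on an image so attached Cinder volumes
# are detected on ESXi. The image name/UUID is a placeholder.
set_vmware_adapter() {
  IMAGE="$1"
  if command -v openstack >/dev/null 2>&1; then
    openstack image set --property vmware_adaptertype="lsiLogicsas" "$IMAGE"
  else
    echo "openstack client not installed; skipping $IMAGE"
  fi
}
set_vmware_adapter my-esx-image
```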

18.2 Before You Start

  1. Review the Chapter 2, Pre-Installation Checklist about recommended pre-installation tasks.

  2. Prepare the Cloud Lifecycle Manager node. The Cloud Lifecycle Manager must be accessible either directly or via ssh, and have SUSE Linux Enterprise Server 12 SP3 installed. All nodes must be accessible to the Cloud Lifecycle Manager. If the nodes do not have direct access to online Cloud subscription channels, the Cloud Lifecycle Manager node will need to host the Cloud repositories.

    1. If you followed the installation instructions for Cloud Lifecycle Manager server (see Chapter 3, Installing the Cloud Lifecycle Manager server), SUSE OpenStack Cloud software should already be installed. Double-check whether SUSE Linux Enterprise and SUSE OpenStack Cloud are properly registered at the SUSE Customer Center by starting YaST and running Software › Product Registration.

      If you have not yet installed SUSE OpenStack Cloud, do so by starting YaST and running Software › Product Registration › Select Extensions. Choose SUSE OpenStack Cloud and follow the on-screen instructions. Make sure to register SUSE OpenStack Cloud during the installation process and to install the software pattern patterns-cloud-ardana.

      tux > sudo zypper -n in patterns-cloud-ardana
    2. Ensure the SUSE OpenStack Cloud media repositories and updates repositories are made available to all nodes in your deployment. This can be accomplished either by configuring the Cloud Lifecycle Manager server as an SMT mirror as described in Chapter 4, Installing and Setting Up an SMT Server on the Cloud Lifecycle Manager server (Optional) or by syncing or mounting the Cloud and updates repositories to the Cloud Lifecycle Manager server as described in Chapter 5, Software Repository Setup.

    3. Configure passwordless sudo for the user created when setting up the node (as described in Section 3.4, “Creating a User”). Note that this is not the user ardana that will be used later in this procedure. In the following we assume you named the user cloud. Run the command visudo as user root and add the following line to the end of the file:

      CLOUD ALL = (root) NOPASSWD:ALL

      Make sure to replace CLOUD with your user name choice.

    4. Set the password for the user ardana:

      tux > sudo passwd ardana
    5. Become the user ardana:

      tux > su - ardana
    6. Place a copy of the SUSE Linux Enterprise Server 12 SP3 .iso in the ardana home directory, /var/lib/ardana, and rename it to sles12sp3.iso.

    7. Install the templates, examples, and working model directories:

      ardana > /usr/bin/ardana-init
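The passwordless sudo configured in step 3 can be sanity-checked after logging in as that user. A small sketch (the helper function name is ours):

```shell
# Print whether sudo works without prompting for a password.
# "sudo -n" fails immediately instead of prompting.
check_passwordless_sudo() {
  if sudo -n true 2>/dev/null; then
    echo "passwordless sudo: OK"
  else
    echo "passwordless sudo: NOT configured"
  fi
}
check_passwordless_sudo
```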

18.3 Setting Up the Cloud Lifecycle Manager

18.3.1 Installing the Cloud Lifecycle Manager

Optionally, set the environment variable ARDANA_INIT_AUTO=1 to avoid stopping for authentication at any step. Run ardana-init to launch the Cloud Lifecycle Manager. You will be prompted to enter an optional SSH passphrase, which is used to protect the key used by Ansible when connecting to its client nodes. If you do not want to use a passphrase, press Enter at the prompt.

If you have protected the SSH key with a passphrase, you can avoid having to enter the passphrase on every attempt by Ansible to connect to its client nodes with the following commands:

ardana > eval $(ssh-agent)
ardana > ssh-add ~/.ssh/id_rsa

The Cloud Lifecycle Manager will contain the installation scripts and configuration files to deploy your cloud. You can set up the Cloud Lifecycle Manager on a dedicated node or on your first controller node. The default choice is to use the first controller node as the Cloud Lifecycle Manager.

  1. Download the product from:

    1. SUSE Customer Center

  2. Boot your Cloud Lifecycle Manager from the SLES ISO contained in the download.

  3. Enter install (all lower-case, exactly as spelled out here) to start installation.

  4. Select the language. Note that only the English language selection is currently supported.

  5. Select the location.

  6. Select the keyboard layout.

  7. Select the primary network interface, if prompted:

    1. Assign IP address, subnet mask, and default gateway

  8. Create new account:

    1. Enter a username.

    2. Enter a password.

    3. Enter time zone.

Once the initial installation is finished, complete the Cloud Lifecycle Manager setup with these steps:

  1. Ensure your Cloud Lifecycle Manager has a valid DNS nameserver specified in /etc/resolv.conf.

  2. Set the environment variable LC_ALL:

    export LC_ALL=C
    Note

    This can be added to ~/.bashrc or /etc/bash.bashrc.
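As the note says, the setting can be persisted for future shells; for example:

```shell
# Append the locale override to the current user's ~/.bashrc so new
# shells inherit it. The grep keeps the operation idempotent.
grep -qx 'export LC_ALL=C' ~/.bashrc 2>/dev/null || \
  echo 'export LC_ALL=C' >> ~/.bashrc
```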

The node should now have a working SLES setup.

18.4 Configure Your Environment

This part of the install depends on the specific cloud configuration you will use.

Set up your configuration files, as follows:

  1. See the sample sets of configuration files in the ~/openstack/examples/ directory. Each set will have an accompanying README.md file that explains the contents of each of the configuration files.

  2. Copy the example configuration files into the required setup directory and edit them to contain the details of your environment:

    cp -r ~/openstack/examples/entry-scale-swift/* \
      ~/openstack/my_cloud/definition/
  3. Begin inputting your environment information into the configuration files in the ~/openstack/my_cloud/definition directory.

    Full details of how to do this can be found here: Book “Planning an Installation with Cloud Lifecycle Manager”, Chapter 11 “Modifying Example Configurations for Object Storage using Swift”, Section 11.10 “Understanding Swift Ring Specifications”, Section 11.10.1 “Ring Specifications in the Input Model”.

    In many cases, the example models provide most of the data you need to create a valid input model. However, there are two important aspects you must plan and configure before starting a deployment:

    • Check the disk model used by your nodes. Specifically, check that all disk drives are correctly named and used as described in Book “Planning an Installation with Cloud Lifecycle Manager”, Chapter 11 “Modifying Example Configurations for Object Storage using Swift”, Section 11.6 “Swift Requirements for Device Group Drives”.

    • Select an appropriate partition power for your rings. Detailed information about this is provided at Book “Planning an Installation with Cloud Lifecycle Manager”, Chapter 11 “Modifying Example Configurations for Object Storage using Swift”, Section 11.10 “Understanding Swift Ring Specifications”.
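To illustrate the partition-power choice, a common Swift rule of thumb (an assumption here, not a value from this guide) is to pick the smallest power p such that 2^p yields at least 100 partitions per drive at your maximum anticipated drive count:

```shell
# Smallest partition power p with 2^p >= 100 * DRIVES.
# The 100-partitions-per-drive target is a rule of thumb; validate the
# result against the ring documentation referenced above.
part_power() {
  target=$(( $1 * 100 ))
  p=0
  pow=1
  while [ "$pow" -lt "$target" ]; do
    pow=$(( pow * 2 ))
    p=$(( p + 1 ))
  done
  echo "$p"
}
part_power 60    # 60 drives at full build-out; prints 13
```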

Optionally, you can use the ardanaencrypt.py script to encrypt your IPMI passwords. This script uses OpenSSL.

  1. Change to the Ansible directory:

    cd ~/openstack/ardana/ansible
  2. Put the encryption key into the following environment variable:

    export ARDANA_USER_PASSWORD_ENCRYPT_KEY=<encryption key>
  3. Run the python script below and follow the instructions. Enter a password that you want to encrypt.

    ardanaencrypt.py
  4. Take the string generated and place it in the "ilo-password" field in your ~/openstack/my_cloud/definition/data/servers.yml file, remembering to enclose it in quotes.

  5. Repeat the above for each server.
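For illustration, a hypothetical servers.yml entry with the encrypted string in place might look like the following (field names follow the example input models; all values are placeholders):

```yaml
servers:
  - id: controller1
    ip-addr: 192.168.10.3
    role: CONTROLLER-ROLE
    server-group: RACK1
    nic-mapping: HP-DL360-4PORT
    mac-addr: "b2:72:8d:ac:7c:6f"
    ilo-ip: 192.168.9.3
    ilo-user: admin
    # The quoted value is the output of ardanaencrypt.py, not a literal
    ilo-password: "<encrypted string generated by ardanaencrypt.py>"
```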

Note

Before you run any playbooks, remember that you need to export the encryption key in the following environment variable: export ARDANA_USER_PASSWORD_ENCRYPT_KEY=<encryption key>

Commit your configuration to the local git repo (Chapter 10, Using Git for Configuration Management), as follows:

cd ~/openstack/ardana/ansible
git add -A
git commit -m "My config or other commit message"
Important

This step needs to be repeated any time you make changes to your configuration files before you move onto the following steps. See Chapter 10, Using Git for Configuration Management for more information.

18.5 Provisioning Your Baremetal Nodes

To provision the baremetal nodes in your cloud deployment, you can either use the automated operating system installation process provided by SUSE OpenStack Cloud or the third-party installation tooling of your choice. Both methods are outlined below.

18.5.1 Using Third Party Baremetal Installers

If you do not wish to use the automated operating system installation tooling included with SUSE OpenStack Cloud, the installation tooling of your choice must meet the following requirements:

  • The operating system must be installed via the SLES ISO provided on the SUSE Customer Center.

  • Each node must have SSH keys in place that allow the same user from the Cloud Lifecycle Manager node who will be doing the deployment to SSH to each node without a password.

  • Passwordless sudo needs to be enabled for the user.

  • The root filesystem on each node should be an LVM logical volume.

  • If the volume group holding the root logical volume is named ardana-vg, it will align with the disk input models in the examples.

  • Ensure that openssh-server, python, python-apt, and rsync are installed.
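A quick way to confirm these requirements from the Cloud Lifecycle Manager is a small SSH probe per node. A sketch under the assumption that node names resolve and the deployment user's key is in place (package names are taken from the list above; adjust them to your distribution's naming):

```shell
# Probe each node for passwordless SSH + sudo and required packages.
# Node names are placeholders; pass your own list.
preflight_nodes() {
  for node in $1; do
    # passwordless SSH and passwordless sudo in a single probe
    if ssh -o BatchMode=yes "$node" 'sudo -n true' 2>/dev/null; then
      echo "OK   $node: passwordless ssh + sudo"
    else
      echo "FAIL $node: passwordless ssh + sudo"
    fi
    # packages required by the deployment playbooks
    ssh -o BatchMode=yes "$node" \
      'rpm -q openssh-server python python-apt rsync' 2>/dev/null
  done
}
preflight_nodes "node1 node2"
```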

If you chose this method for installing your baremetal hardware, skip forward to the step Running the Configuration Processor.

18.5.2 Using the Automated Operating System Installation Provided by SUSE OpenStack Cloud

If you would like to use the automated operating system installation tools provided by SUSE OpenStack Cloud, complete the steps below.

18.5.2.1 Deploying Cobbler

This phase of the install process takes the baremetal information provided in servers.yml, installs the Cobbler provisioning tool, and loads the information into Cobbler. It sets each node to netboot-enabled: true in Cobbler. Each node is automatically marked netboot-enabled: false when it completes its operating system install successfully. After that, even if the node tries to PXE boot, Cobbler will not serve it. This is deliberate so that you cannot reimage a live node by accident.

The cobbler-deploy.yml playbook prompts for a password. This password is encrypted and stored in Cobbler, associated with the user running the command on the Cloud Lifecycle Manager, and is the password you will use to log in to the nodes via their consoles after install. The username is the same as the user set up in the initial dialog when installing the Cloud Lifecycle Manager from the ISO, and is the same user running the cobbler-deploy play.

Note

When imaging servers with your own tooling, it is still necessary to have ILO/IPMI settings for all nodes. Even if you are not using Cobbler, the username and password fields in servers.yml need to be filled in with dummy settings. For example, add the following to servers.yml:

ilo-user: manual
ilo-password: deployment
  1. Run the following playbook to confirm that there is IPMI connectivity for each of your nodes so that they are accessible to be re-imaged in a later step:

    ardana > cd ~/openstack/ardana/ansible
    ardana > ansible-playbook -i hosts/localhost bm-power-status.yml
  2. Run the following playbook to deploy Cobbler:

    ardana > cd ~/openstack/ardana/ansible
    ardana > ansible-playbook -i hosts/localhost cobbler-deploy.yml

18.5.2.2 Imaging the Nodes

This phase of the install process goes through a number of distinct steps:

  1. Powers down the nodes to be installed.

  2. Sets the nodes' hardware boot order so that the first option is a network boot.

  3. Powers on the nodes. (The nodes will then boot from the network and be installed using the infrastructure set up in the previous phase.)

  4. Waits for the nodes to power themselves down, which indicates a successful install. This can take some time.

  5. Sets the boot order to hard disk and powers on the nodes.

  6. Waits for the nodes to be reachable by SSH and verifies that they have the expected signature.

Deploying nodes has been automated in the Cloud Lifecycle Manager and requires the following:

  • All of your nodes using SLES must already be installed, either manually or via Cobbler.

  • Your input model should be configured for your SLES nodes, according to the instructions at Book “Planning an Installation with Cloud Lifecycle Manager”, Chapter 10 “Modifying Example Configurations for Compute Nodes”, Section 10.1 “SLES Compute Nodes”.

  • You should have run the configuration processor and the ready-deployment.yml playbook.

Execute the following steps to re-image one or more nodes after you have run the ready-deployment.yml playbook.

  1. Run the following playbook, specifying your SLES nodes using the nodelist. This playbook will reconfigure Cobbler for the nodes listed.

    ardana > cd ~/openstack/ardana/ansible
    ardana > ansible-playbook prepare-sles-grub2.yml -e \
          nodelist=node1[,node2,node3]
  2. Re-image the node(s) with the following command:

    ardana > cd ~/openstack/ardana/ansible
    ardana > ansible-playbook -i hosts/localhost bm-reimage.yml \
          -e nodelist=node1[,node2,node3]

If a nodelist is not specified then the set of nodes in Cobbler with netboot-enabled: True is selected. The playbook pauses at the start to give you a chance to review the set of nodes that it is targeting and to confirm that it is correct.

You can use the command below to list all of your nodes with the netboot-enabled: True flag set:

sudo cobbler system find --netboot-enabled=1
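If you later need to make a node eligible for reimaging again (for example after a failed install), the flag can be flipped back. A sketch, guarded for hosts without Cobbler (the node name is a placeholder; use a name from the listing above):

```shell
# Re-enable netboot for a single node so bm-reimage.yml will target it.
NODE=node1
if command -v cobbler >/dev/null 2>&1; then
  sudo cobbler system edit --name "$NODE" --netboot-enabled=1
else
  echo "cobbler not installed on this host; nothing to do for $NODE"
fi
```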

18.6 Running the Configuration Processor

Once you have your configuration files setup, you need to run the configuration processor to complete your configuration.

When you run the configuration processor, you will be prompted for two passwords. Enter the first password to make the configuration processor encrypt its sensitive data, which consists of the random inter-service passwords that it generates and the ansible group_vars and host_vars that it produces for subsequent deploy runs. You will need this password for subsequent Ansible deploy and configuration processor runs. If you wish to change an encryption password that you have already used when running the configuration processor then enter the new password at the second prompt, otherwise just press Enter to bypass this.

Run the configuration processor with this command:

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml

For automated installation (for example CI), you can specify the required passwords on the ansible command line. For example, the command below will disable encryption by the configuration processor:

ardana > ansible-playbook -i hosts/localhost config-processor-run.yml \
  -e encrypt="" -e rekey=""

If you receive an error during this step, there is probably an issue with one or more of your configuration files. Verify that all information in each of your configuration files is correct for your environment. Then commit those changes to Git using the instructions in the previous section before re-running the configuration processor.

For any troubleshooting information regarding these steps, see Section 23.2, “Issues while Updating Configuration Files”.

18.7 Deploying the Cloud

  1. Use the playbook below to create a deployment directory:

    cd ~/openstack/ardana/ansible
    ansible-playbook -i hosts/localhost ready-deployment.yml
  2. [OPTIONAL] - Run the wipe_disks.yml playbook to ensure all of your non-OS partitions on your nodes are completely wiped before continuing with the installation. The wipe_disks.yml playbook is only meant to be run on systems immediately after running bm-reimage.yml. If used for any other case, it may not wipe all of the expected partitions.

    If you are using fresh machines this step may not be necessary.

    ardana > cd ~/scratch/ansible/next/ardana/ansible
    ardana > ansible-playbook -i hosts/verb_hosts wipe_disks.yml

    If you have used an encryption password when running the configuration processor use the command below and enter the encryption password when prompted:

    ardana > ansible-playbook -i hosts/verb_hosts wipe_disks.yml --ask-vault-pass
  3. Run the site.yml playbook below:

    ardana > cd ~/scratch/ansible/next/ardana/ansible
    ardana > ansible-playbook -i hosts/verb_hosts site.yml

    If you have used an encryption password when running the configuration processor use the command below and enter the encryption password when prompted:

    ardana > ansible-playbook -i hosts/verb_hosts site.yml --ask-vault-pass
    Note

    The step above runs osconfig to configure the cloud and ardana-deploy to deploy the cloud. Therefore, this step may run for a while, perhaps 45 minutes or more, depending on the number of nodes in your environment.

  4. Verify that the network is working correctly. Ping each IP in the /etc/hosts file from one of the controller nodes.
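One way to script this check: a sketch that pings every IPv4 entry in /etc/hosts once (the awk filter and ping timeout are assumptions; adapt them to your naming and network):

```shell
# Ping each IPv4 address in a hosts file once and report the result.
check_hosts() {
  hostsfile="${1:-/etc/hosts}"
  awk '$1 ~ /^[0-9]+\./ { print $1, $2 }' "$hostsfile" |
  while read -r ip name; do
    if ping -c 1 -W 2 "$ip" >/dev/null 2>&1; then
      echo "OK   $ip ($name)"
    else
      echo "FAIL $ip ($name)"
    fi
  done
}
check_hosts /etc/hosts
```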

For any troubleshooting information regarding these steps, see Section 23.3, “Issues while Deploying the Cloud”.

Note
  • The HPE Smart Storage Administrator (HPE SSA) CLI component must be installed on all control nodes that are Swift nodes in order to generate the following Swift metrics:

    • swiftlm.hp_hardware.hpssacli.smart_array

    • swiftlm.hp_hardware.hpssacli.logical_drive

    • swiftlm.hp_hardware.hpssacli.smart_array.firmware

    • swiftlm.hp_hardware.hpssacli.physical_drive

  • HPE-specific binaries that are not based on open source are distributed directly from and supported by HPE. To download and install the SSACLI utility to enable management of disk controllers, please refer to: https://support.hpe.com/hpsc/swd/public/detail?swItemId=MTX_3d16386b418a443388c18da82f

  • After the HPE SSA CLI component is installed on the Swift nodes, the metrics will be generated automatically during the next agent polling cycle. Manual reboot of the node is not required.

18.8 Post-Installation Verification and Administration

We recommend verifying the installation using the instructions in Chapter 26, Cloud Verification.

There is also a list of other common post-installation administrative tasks in Chapter 32, Other Common Post-Installation Tasks.
