24 Installing Mid-scale and Entry-scale KVM #
24.1 Important Notes #
For information about when to use the GUI installer and when to use the command line (CLI), see Chapter 13, Overview.
Review Chapter 2, Hardware and Software Support Matrix.
Review the release notes to make yourself aware of any known issues and limitations.
The installation can be performed in phases. For example, you can install the control plane first and add Compute nodes later.
If you run into issues during installation, see Chapter 36, Troubleshooting the Installation.
Make sure all disks on the system(s) are wiped before you begin the install. (For swift, refer to Section 11.6, “Swift Requirements for Device Group Drives”.)
There is no requirement to have a dedicated network for OS installation and system deployment; this network can be shared with the management network. More information can be found in Chapter 9, Example Configurations.
The terms deployer and Cloud Lifecycle Manager are used interchangeably. They refer to the same nodes in your cloud environment.
When running the Ansible playbooks in this installation guide, a failed playbook may suggest in its error output that you retry with the --limit switch. Avoid doing so; simply re-run the playbook without this switch.
DVR is not supported with ESX compute.
When you attach a cinder volume to a VM running on an ESXi host, the volume is not detected automatically. Set the image metadata property vmware_adaptertype=lsiLogicsas on the image before launching the instance so that attached volumes are discovered correctly.
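As a hedged example (the image name my-esx-image is hypothetical, and OpenStack credentials must be sourced first), the property can be set and verified with the OpenStack CLI:
ardana > openstack image set --property vmware_adaptertype=lsiLogicsas my-esx-image
ardana > openstack image show my-esx-image -c properties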
The installation process will create several OpenStack roles. Not all roles are relevant for a swift-only cloud, but they will not cause problems.
24.2 Prepare for Cloud Installation #
Review the Chapter 14, Pre-Installation Checklist about recommended pre-installation tasks.
Prepare the Cloud Lifecycle Manager node. The Cloud Lifecycle Manager must be accessible either directly or via ssh, and must have SUSE Linux Enterprise Server 12 SP4 installed. All nodes must be accessible to the Cloud Lifecycle Manager. If the nodes do not have direct access to online Cloud subscription channels, the Cloud Lifecycle Manager node will need to host the Cloud repositories.

If you followed the installation instructions for the Cloud Lifecycle Manager server (see Chapter 15, Installing the Cloud Lifecycle Manager server), the SUSE OpenStack Cloud software should already be installed. Double-check whether SUSE Linux Enterprise and SUSE OpenStack Cloud are properly registered at the SUSE Customer Center by starting YaST and running › .

If you have not yet installed SUSE OpenStack Cloud, do so by starting YaST and running › › . Choose and follow the on-screen instructions. Make sure to register SUSE OpenStack Cloud during the installation process and to install the software pattern patterns-cloud-ardana:

tux > sudo zypper -n in patterns-cloud-ardana

Ensure the SUSE OpenStack Cloud media repositories and updates repositories are made available to all nodes in your deployment. This can be accomplished either by configuring the Cloud Lifecycle Manager server as an SMT mirror as described in Chapter 16, Installing and Setting Up an SMT Server on the Cloud Lifecycle Manager server (Optional), or by syncing or mounting the Cloud and updates repositories to the Cloud Lifecycle Manager server as described in Chapter 17, Software Repository Setup.
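As an optional quick check (not required by this guide), you can confirm registration and repository availability from the command line on the Cloud Lifecycle Manager; SUSEConnect and zypper are part of a standard SLES 12 SP4 installation:

tux > sudo SUSEConnect --status-text
tux > zypper repos -u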
Configure passwordless sudo for the user created when setting up the node (as described in Section 15.4, “Creating a User”). Note that this is not the user ardana that will be used later in this procedure. In the following we assume you named the user cloud. Run the command visudo as user root and add the following line to the end of the file:

CLOUD ALL = (root) NOPASSWD:ALL
Make sure to replace CLOUD with your user name choice.
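To verify the change, log in as that user and run a quick non-interactive sudo check; if passwordless sudo is configured correctly, this prints OK without prompting (the user name cloud is the assumption from above):

cloud > sudo -n true && echo OK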
Set the password for the user ardana:

tux > sudo passwd ardana

Become the user ardana:

tux > su - ardana

Place a copy of the SUSE Linux Enterprise Server 12 SP4 .iso in the ardana home directory, /var/lib/ardana, and rename it to sles12sp4.iso (an example copy command is shown after these steps).

Install the templates, examples, and working model directories:

ardana > /usr/bin/ardana-init
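For the ISO step above, assuming the downloaded image is in /tmp and named SLE-12-SP4-Server-DVD-x86_64-GM-DVD1.iso (your file name and location may differ), copying and renaming it could look like this:

ardana > cp /tmp/SLE-12-SP4-Server-DVD-x86_64-GM-DVD1.iso /var/lib/ardana/sles12sp4.iso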
24.3 Configuring Your Environment #
During the configuration phase of the installation you will modify the example configuration input files to match your cloud environment. See Chapter 9, Example Configurations for detailed information on how to do this. There is also a README.md file included in each of the example directories on the Cloud Lifecycle Manager with useful information about the models.
In the steps below we show how to set up the directory structure with the example input files as well as use the optional encryption methods for your sensitive data.
Set up your configuration files, as follows:
Copy the example configuration files into the required setup directory and edit them to contain the details of your environment.
For example, if you want to use the SUSE OpenStack Cloud Mid-scale KVM model, you can use this command to copy the files to your cloud definition directory:
ardana > cp -r ~/openstack/examples/mid-scale-kvm/* \
  ~/openstack/my_cloud/definition/

If you want to use the SUSE OpenStack Cloud Entry-scale KVM model, you can use this command to copy the files to your cloud definition directory:

ardana > cp -r ~/openstack/examples/entry-scale-kvm/* \
  ~/openstack/my_cloud/definition/

Begin inputting your environment information into the configuration files in the ~/openstack/my_cloud/definition directory.
(Optional) You can use the ardanaencrypt.py script to encrypt your IPMI passwords. This script uses OpenSSL.

Change to the Ansible directory:

ardana > cd ~/openstack/ardana/ansible

Put the encryption key into the following environment variable:

export ARDANA_USER_PASSWORD_ENCRYPT_KEY=<encryption key>

Run the python script below and follow the instructions. Enter a password that you want to encrypt.

ardana > ./ardanaencrypt.py

Take the string generated and place it in the ilo-password field in your ~/openstack/my_cloud/definition/data/servers.yml file, remembering to enclose it in quotes. Repeat the above for each server. An illustrative servers.yml excerpt follows.
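The excerpt below is a sketch only; the field names follow the example input models, and every value shown (ID, addresses, MAC, role, group, and NIC mapping names) is a placeholder for your own data. The string produced by ardanaencrypt.py replaces the plain-text password and is enclosed in quotes:

servers:
  - id: controller1
    ip-addr: 192.168.10.3
    role: CONTROLLER-ROLE
    server-group: RACK1
    nic-mapping: HP-DL360-4PORT
    mac-addr: "b2:72:8d:ac:7c:6f"
    ilo-ip: 192.168.9.3
    ilo-user: admin
    ilo-password: "OUTPUT_OF_ARDANAENCRYPT"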
Note: Before you run any playbooks, remember that you need to export the encryption key in the following environment variable:
export ARDANA_USER_PASSWORD_ENCRYPT_KEY=<encryption key>
Commit your configuration to the local git repo (Chapter 22, Using Git for Configuration Management), as follows:
ardana > cd ~/openstack/ardana/ansible
ardana > git add -A
ardana > git commit -m "My config or other commit message"

Important: This step needs to be repeated any time you make changes to your configuration files before you move on to the following steps. See Chapter 22, Using Git for Configuration Management for more information.
24.4 Provisioning Your Baremetal Nodes #
To provision the baremetal nodes in your cloud deployment, you can either use the automated operating system installation process provided by SUSE OpenStack Cloud or use third-party installation tooling of your choice. Both methods are outlined below:
24.4.1 Using Third Party Baremetal Installers #
If you do not wish to use the automated operating system installation tooling included with SUSE OpenStack Cloud, the installation tooling of your choice must meet the following requirements:
The operating system must be installed via the SLES ISO provided on the SUSE Customer Center.
Each node must have SSH keys in place that allow the user on the Cloud Lifecycle Manager node who will be doing the deployment to SSH to each node without a password (a sketch of one way to set this up follows this list).
Passwordless sudo needs to be enabled for the user.
The root partition on each node should be an LVM logical volume.
If the volume group holding the root LVM logical volume is named ardana-vg, it will align with the disk input models in the examples.
, then it will align with the disk input models in the examples.Ensure that
openssh-server
,python
,python-apt
, andrsync
are installed.
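The following is a minimal sketch of one way to satisfy the SSH-key and package requirements, assuming the deployment user is ardana and a node at 192.168.10.4 (both hypothetical); adapt it to your own tooling. Run the key commands on the Cloud Lifecycle Manager and the package installation on each node; package availability depends on the repositories configured there:

ardana > ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ardana > ssh-copy-id ardana@192.168.10.4

tux > sudo zypper -n in openssh python python-apt rsync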
If you chose this method for installing your baremetal hardware, skip forward to the step Running the Configuration Processor.
24.4.2 Using the Automated Operating System Installation Provided by SUSE OpenStack Cloud #
If you would like to use the automated operating system installation tools provided by SUSE OpenStack Cloud, complete the steps below.
24.4.2.1 Deploying Cobbler #
This phase of the install process takes the baremetal information provided in servers.yml, installs the Cobbler provisioning tool, and loads this information into Cobbler. This sets each node to netboot-enabled: true in Cobbler. Each node will be automatically marked as netboot-enabled: false when it completes its operating system install successfully. Even if the node tries to PXE boot subsequently, Cobbler will not serve it. This is deliberate so that you cannot reimage a live node by accident.
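If you want to inspect the netboot state of an individual node in Cobbler (node1 is a hypothetical node name), one way is:

tux > sudo cobbler system report --name=node1 | grep -i netboot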
The cobbler-deploy.yml playbook prompts for a password. This password is encrypted and stored in Cobbler, is associated with the user running the command on the Cloud Lifecycle Manager, and is the password you will use to log in to the nodes via their consoles after install. The username is the same as the user set up in the initial dialogue when installing the Cloud Lifecycle Manager from the ISO, and is the same user that runs the cobbler-deploy play.
When imaging servers with your own tooling, it is still necessary to have
ILO/IPMI settings for all nodes. Even if you are not using Cobbler, the
username and password fields in servers.yml
need to
be filled in with dummy settings. For example, add the following to
servers.yml
:
ilo-user: manual
ilo-password: deployment
Run the following playbook to confirm that there is IPMI connectivity to each of your nodes, so that they can be re-imaged in a later step:
ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost bm-power-status.yml

Run the following playbook to deploy Cobbler:

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost cobbler-deploy.yml
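After cobbler-deploy.yml completes, you can optionally confirm that your nodes were loaded into Cobbler by listing its systems:

tux > sudo cobbler system list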
24.4.2.2 Imaging the Nodes #
This phase of the install process goes through a number of distinct steps:
Powers down the nodes to be installed
Sets the nodes' hardware boot order so that the first option is a network boot.
Powers on the nodes. (The nodes will then boot from the network and be installed using infrastructure set up in the previous phase)
Waits for the nodes to power themselves down (this indicates a successful install). This can take some time.
Sets the boot order to hard disk and powers on the nodes.
Waits for the nodes to be reachable by SSH and verifies that they have the expected signature.
Deploying nodes has been automated in the Cloud Lifecycle Manager and requires the following:
All of your nodes using SLES must already be installed, either manually or via Cobbler.
Your input model should be configured for your SLES nodes.
You should have run the configuration processor and the
ready-deployment.yml
playbook.
Execute the following steps to re-image one or more nodes after you have
run the ready-deployment.yml
playbook.
Run the following playbook, specifying your SLES nodes using the nodelist. This playbook will reconfigure Cobbler for the nodes listed.
ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook prepare-sles-grub2.yml -e \
  nodelist=node1[,node2,node3]

Re-image the node(s) with the following command:

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost bm-reimage.yml \
  -e nodelist=node1[,node2,node3]
If a nodelist is not specified then the set of nodes in Cobbler with
netboot-enabled: True
is selected. The playbook pauses
at the start to give you a chance to review the set of nodes that it is
targeting and to confirm that it is correct.
You can use the command below to list all of your nodes with the
netboot-enabled: True
flag set:
sudo cobbler system find --netboot-enabled=1
24.5 Running the Configuration Processor #
Once you have your configuration files set up, you need to run the configuration processor to complete your configuration.
When you run the configuration processor, you will be prompted for two
passwords. Enter the first password to make the configuration processor
encrypt its sensitive data, which consists of the random inter-service
passwords that it generates and the ansible group_vars
and host_vars
that it produces for subsequent deploy
runs. You will need this password for subsequent Ansible deploy and
configuration processor runs. If you wish to change an encryption password
that you have already used when running the configuration processor then
enter the new password at the second prompt, otherwise just press
Enter to bypass this.
Run the configuration processor with this command:
ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml
For automated installation (for example CI), you can specify the required passwords on the ansible command line. For example, the command below will disable encryption by the configuration processor:
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml \
  -e encrypt="" -e rekey=""
If you receive an error during this step, there is probably an issue with one or more of your configuration files. Verify that all information in each of your configuration files is correct for your environment, then commit those changes to Git using the instructions in the previous section before re-running the configuration processor.
For any troubleshooting information regarding these steps, see Section 36.2, “Issues while Updating Configuration Files”.
24.6 Configuring TLS #
This section is optional, but recommended, for a SUSE OpenStack Cloud installation.
After you run the configuration processor the first time, the IP addresses
for your environment will be generated and populated in the
~/openstack/my_cloud/info/address_info.yml
file. At
this point, consider whether to configure TLS and set up an SSL certificate
for your environment. For details on how to do this, read Chapter 41, Configuring Transport Layer Security (TLS) before proceeding.
24.7 Deploying the Cloud #
Use the playbook below to create a deployment directory:
ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost ready-deployment.yml
[OPTIONAL] Run the wipe_disks.yml playbook to ensure all of your non-OS partitions on your nodes are completely wiped before continuing with the installation. The wipe_disks.yml playbook is only meant to be run on systems immediately after running bm-reimage.yml. If used in any other case, it may not wipe all of the expected partitions. If you are using fresh machines, this step may not be necessary.

ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts wipe_disks.yml

If you used an encryption password when running the configuration processor, use the command below and enter the encryption password when prompted:

ardana > ansible-playbook -i hosts/verb_hosts wipe_disks.yml --ask-vault-pass

Run the site.yml playbook below:

ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts site.yml

If you used an encryption password when running the configuration processor, use the command below and enter the encryption password when prompted:

ardana > ansible-playbook -i hosts/verb_hosts site.yml --ask-vault-pass

Note: The step above runs osconfig to configure the cloud and ardana-deploy to deploy the cloud. Therefore, this step may run for a while, perhaps 45 minutes or more, depending on the number of nodes in your environment.

Verify that the network is working correctly. Ping each IP in the /etc/hosts file from one of the controller nodes.
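A rough way to script that check from a controller node is sketched below; it pings every IPv4 entry in /etc/hosts once (adjust as needed, for example for IPv6 entries or hosts you expect to be down):

ardana > for ip in $(awk '/^[0-9]/ {print $1}' /etc/hosts | sort -u); do ping -c 1 -W 2 "$ip" >/dev/null && echo "$ip OK" || echo "$ip FAILED"; done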
For any troubleshooting information regarding these steps, see Section 36.3, “Issues while Deploying the Cloud”.
24.8 Configuring a Block Storage Backend #
SUSE OpenStack Cloud supports multiple Block Storage backend options. You can use one or more of these for setting up multiple Block Storage backends. Multiple volume types are also supported. For more information on configuring backends, see Configure multiple-storage back ends and the default_volume_type option.

Create a volume type:

ardana > openstack volume type create --public NAME
Volume types do not have to be tied to a backend. They can contain attributes that are applied to a volume during creation; these are referred to as extra specs. One of those attributes is volume_backend_name. By setting this value to match the volume_backend_name of a specific backend as defined in cinder.conf, volumes created with that type will always land on that backend. For example, if cinder.conf has an LVM volume backend defined as:

[lvmdriver-1]
image_volume_cache_enabled = True
volume_clear = zero
lvm_type = auto
target_helper = lioadm
volume_group = stack-volumes-lvmdriver-1
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvmdriver-1
You can tie a volume type to that specific backend by doing:
$ openstack volume type create --public my_lvm_type
$ openstack volume type set --property volume_backend_name=lvmdriver-1 my_lvm_type
$ openstack volume type show my_lvm_type
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| access_project_ids | None                                 |
| description        | None                                 |
| id                 | a2049509-7789-4949-95e2-89cf7fd5792f |
| is_public          | True                                 |
| name               | my_lvm_type                          |
| properties         | volume_backend_name='lvmdriver-1'    |
| qos_specs_id       | None                                 |
+--------------------+--------------------------------------+
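To see the extra spec take effect, you can create a test volume with that type and check the result (the volume name and size below are arbitrary examples):

$ openstack volume create --type my_lvm_type --size 1 test_volume
$ openstack volume show test_volume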
24.9 Post-Installation Verification and Administration #
We recommend verifying the installation using the instructions in Chapter 38, Post Installation Tasks.
Other common post-installation administrative tasks are listed in Chapter 44, Other Common Post-Installation Tasks.