For information about when to use the GUI installer and when to use the command line (CLI), see Chapter 1, Overview.
Review Book “Planning an Installation with Cloud Lifecycle Manager”, Chapter 2 “Hardware and Software Support Matrix”.
Review the release notes to make yourself aware of any known issues and limitations.
The installation process can occur in phases. For example, you can install the control plane first and add Compute nodes afterwards.
If you run into issues during installation, see Chapter 24, Troubleshooting the Installation.
Make sure all disks on the system(s) are wiped before you begin the install. (For Swift, refer to Book “Planning an Installation with Cloud Lifecycle Manager”, Chapter 11 “Modifying Example Configurations for Object Storage using Swift”, Section 11.6 “Swift Requirements for Device Group Drives”.)
There is no requirement to have a dedicated network for OS-install and system deployment; this can be shared with the management network. More information can be found in Book “Planning an Installation with Cloud Lifecycle Manager”, Chapter 9 “Example Configurations”.
The terms deployer and Cloud Lifecycle Manager are used interchangeably. They refer to the same nodes in your cloud environment.
When running the Ansible playbooks in this installation guide, a playbook failure produces an error message suggesting that you retry with the --limit switch. Avoid this switch; simply re-run the playbook in full without it, as shown below.
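For example, if the site.yml playbook (used later in this guide) fails partway through, re-run it without --limit:
ardana >
ansible-playbook -i hosts/verb_hosts site.yml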
DVR is not supported with ESX compute.
When you attach a Cinder volume to a VM running on an ESXi host, the volume will not be detected automatically. Make sure to set the image metadata vmware_adaptertype=lsiLogicsas on the image before launching the instance so that the attached volume is discovered correctly.
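For example, this metadata can be set with the OpenStack CLI; a minimal sketch, where <image-id> is a placeholder for your image's name or ID:
ardana >
openstack image set --property vmware_adaptertype="lsiLogicsas" <image-id>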
The installation process will create several OpenStack roles. Not all roles will be relevant for a cloud with Swift only, but they will not cause problems.
Review Chapter 2, Pre-Installation Checklist for recommended pre-installation tasks.
Prepare the Cloud Lifecycle Manager node. The Cloud Lifecycle Manager must be accessible either directly or via ssh, and have SUSE Linux Enterprise Server 12 SP3 installed. All nodes must be accessible to the Cloud Lifecycle Manager. If the nodes do not have direct access to online Cloud subscription channels, the Cloud Lifecycle Manager node will need to host the Cloud repositories.
If you followed the installation instructions for Cloud Lifecycle Manager server (see Chapter 3, Installing the Cloud Lifecycle Manager server), HPE Helion OpenStack software should already be installed. Double-check whether SUSE Linux Enterprise and HPE Helion OpenStack are properly registered at the SUSE Customer Center by starting YaST and running › .
If you have not yet installed HPE Helion OpenStack, do so by starting YaST and running › › . Choose and follow the on-screen instructions. Make sure to register HPE Helion OpenStack during the installation process and to install the software pattern patterns-cloud-ardana.
tux >
sudo zypper -n in patterns-cloud-ardana
Ensure the HPE Helion OpenStack media repositories and updates repositories are made available to all nodes in your deployment. This can be accomplished either by configuring the Cloud Lifecycle Manager server as an SMT mirror as described in Chapter 4, Installing and Setting Up an SMT Server on the Cloud Lifecycle Manager server (Optional) or by syncing or mounting the Cloud and updates repositories to the Cloud Lifecycle Manager server as described in Chapter 5, Software Repository Setup.
Configure passwordless sudo for the user created when setting up the node (as described in Section 3.4, “Creating a User”). Note that this is not the user ardana that will be used later in this procedure. In the following we assume you named the user cloud. Run the command visudo as user root and add the following line to the end of the file:
CLOUD ALL = (root) NOPASSWD:ALL
Make sure to replace CLOUD with your user name choice.
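For example, with the assumed user name cloud from above, the entry reads:
cloud ALL = (root) NOPASSWD:ALL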
Set the password for the user ardana:
tux >
sudo passwd ardana
Become the user ardana:
tux >
su - ardana
Place a copy of the SUSE Linux Enterprise Server 12 SP3 .iso in the ardana home directory, /var/lib/ardana, and rename it to sles12sp3.iso.
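For example, assuming the ISO was downloaded to /tmp (the source file name below is illustrative):
ardana >
cp /tmp/SLE-12-SP3-Server-DVD-x86_64-GM-DVD1.iso /var/lib/ardana/sles12sp3.iso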
Install the templates, examples, and working model directories:
ardana >
/usr/bin/ardana-init
You have already configured an input model for a stand-alone deployer in a previous step (Chapter 8, Preparing for Stand-Alone Deployment). Now that input model needs to be moved into the setup directory.
ardana >
cp -r ~/openstack/examples/entry-scale-kvm-stand-alone-deployer/* \
~/openstack/my_cloud/definition/
(Optional) You can use the ardanaencrypt.py script to encrypt your IPMI passwords. This script uses OpenSSL.
Change to the Ansible directory:
ardana >
cd ~/openstack/ardana/ansible
Enter the encryption key into the following environment variable:
ardana >
export ARDANA_USER_PASSWORD_ENCRYPT_KEY=<encryption key>
Run the Python script below and follow the instructions, entering the password that you want to encrypt.
ardana >
./ardanaencrypt.py
Take the generated string and place it in the ilo-password field in your ~/openstack/my_cloud/definition/data/servers.yml file, remembering to enclose it in quotes. Repeat this for each server; a sketch of the resulting entry follows.
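A hedged sketch of one servers.yml entry; the id, user name, and address are illustrative, and the remaining fields of your entry stay unchanged:
servers:
  - id: controller1
    # ... other fields for this server unchanged ...
    ilo-user: admin
    ilo-ip: 192.168.9.3
    ilo-password: "<string generated by ardanaencrypt.py>"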
Before you run any playbooks, remember that you need to export the encryption key in the following environment variable:
ardana >
export ARDANA_USER_PASSWORD_ENCRYPT_KEY=<encryption key>
Commit your configuration to the local git repo (Chapter 10, Using Git for Configuration Management), as follows:
ardana >
cd ~/openstack/ardana/ansible
ardana >
git add -A
ardana >
git commit -m "My config or other commit message"
This step needs to be repeated any time you make changes to your configuration files before you move on to the following steps. See Chapter 10, Using Git for Configuration Management for more information.
Once you have your configuration files set up, you need to run the configuration processor to complete your configuration.
When you run the configuration processor, you will be prompted for two passwords. Enter the first password to make the configuration processor encrypt its sensitive data, which consists of the random inter-service passwords that it generates and the Ansible group_vars and host_vars that it produces for subsequent deploy runs. You will need this password for subsequent Ansible deploy and configuration processor runs. If you wish to change an encryption password that you have already used when running the configuration processor, enter the new password at the second prompt; otherwise just press Enter to bypass it.
Run the configuration processor with this command:
ardana >
cd ~/openstack/ardana/ansible
ardana >
ansible-playbook -i hosts/localhost config-processor-run.yml
For automated installation (for example, CI), you can specify the required passwords on the Ansible command line. For example, the command below will disable encryption by the configuration processor:
ardana >
ansible-playbook -i hosts/localhost config-processor-run.yml \
-e encrypt="" -e rekey=""
If you receive an error during this step, there is probably an issue with one or more of your configuration files. Verify that all information in each of your configuration files is correct for your environment. Then commit those changes to Git using the instructions in the previous section before re-running the configuration processor.
For any troubleshooting information regarding these steps, see Section 24.2, “Issues while Updating Configuration Files”.
This section is optional, but recommended, for an HPE Helion OpenStack installation.
After you run the configuration processor the first time, the IP addresses for your environment will be generated and populated in the ~/openstack/my_cloud/info/address_info.yml file. At this point, consider whether to configure TLS and set up an SSL certificate for your environment. Read Chapter 30, Configuring Transport Layer Security (TLS), before proceeding.
Use the playbook below to create a deployment directory:
ardana >
cd ~/openstack/ardana/ansible
ardana >
ansible-playbook -i hosts/localhost ready-deployment.yml
[OPTIONAL] Run the wipe_disks.yml playbook to ensure all of the non-OS partitions on your nodes are completely wiped before continuing with the installation. The wipe_disks.yml playbook is only meant to be run on systems immediately after running bm-reimage.yml. If used in any other case, it may not wipe all of the expected partitions. If you are using fresh machines, this step may not be necessary.
ardana >
cd ~/scratch/ansible/next/ardana/ansible
ardana >
ansible-playbook -i hosts/verb_hosts wipe_disks.yml
If you have used an encryption password when running the configuration processor use the command below and enter the encryption password when prompted:
ardana >
ansible-playbook -i hosts/verb_hosts wipe_disks.yml --ask-vault-pass
Run the site.yml playbook below:
ardana >
cd ~/scratch/ansible/next/ardana/ansible
ardana >
ansible-playbook -i hosts/verb_hosts site.yml
If you have used an encryption password when running the configuration processor use the command below and enter the encryption password when prompted:
ardana >
ansible-playbook -i hosts/verb_hosts site.yml --ask-vault-pass
The step above runs osconfig to configure the cloud and ardana-deploy to deploy the cloud. Therefore, this step may run for a while, perhaps 45 minutes or more, depending on the number of nodes in your environment.
Verify that the network is working correctly. Ping each IP in the /etc/hosts file from one of the controller nodes.
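A hedged convenience loop for this check, assuming the standard "IP hostname" format in /etc/hosts; adjust the filter to your environment as needed:
ardana >
for ip in $(awk 'NF && !/^#/ {print $1}' /etc/hosts); do
  ping -c 1 -W 2 "$ip" >/dev/null && echo "$ip OK" || echo "$ip unreachable"
done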
For any troubleshooting information regarding these steps, see Section 24.3, “Issues while Deploying the Cloud”.
The OpenStack CLI and OpenStack clients will not be installed automatically. If you require access to these clients, you will need to follow the procedure below to add the appropriate software.
[OPTIONAL] To confirm that OpenStack clients have not been installed, connect to your stand-alone deployer and try to use the OpenStack CLI:
ardana >
source ~/keystone.osrc
ardana >
openstack project list
-bash: openstack: command not found
Edit the configuration file containing details of your Control Plane, ~/openstack/my_cloud/definition/data/control_plane.yml.
Locate the stanza for the cluster where you want to install the client(s). This will look like the following extract:
clusters:
  - name: cluster0
    cluster-prefix: c0
    server-role: LIFECYCLE-MANAGER-ROLE
    member-count: 1
    allocation-policy: strict
    service-components:
      - ntp-server
      - lifecycle-manager
Choose the client(s) you wish to install from the following list of available clients:
- barbican-client
- ceilometer-client
- cinder-client
- designate-client
- glance-client
- heat-client
- ironic-client
- keystone-client
- magnum-client
- manila-client
- monasca-client
- neutron-client
- nova-client
- ntp-client
- octavia-client
- openstack-client
- swift-client
Add the client(s) to the list of service-components. In the following example, several OpenStack clients are added to the stand-alone deployer:
clusters:
  - name: cluster0
    cluster-prefix: c0
    server-role: LIFECYCLE-MANAGER-ROLE
    member-count: 1
    allocation-policy: strict
    service-components:
      - ntp-server
      - lifecycle-manager
      - openstack-client
      - ceilometer-client
      - cinder-client
      - designate-client
      - glance-client
      - heat-client
      - ironic-client
      - keystone-client
      - neutron-client
      - nova-client
      - swift-client
      - monasca-client
      - barbican-client
Commit the configuration changes:
ardana >
cd ~/openstack/ardana/ansible
ardana >
git add -A
ardana >
git commit -m "Add explicit client service deployment"
Run the configuration processor, followed by the ready-deployment playbook:
ardana >
cd ~/openstack/ardana/ansible
ardana >
ansible-playbook -i hosts/localhost config-processor-run.yml -e encrypt="" \
-e rekey=""
ardana >
ansible-playbook -i hosts/localhost ready-deployment.yml
Add the software for the clients using the following command:
ardana >
cd ~/scratch/ansible/next/ardana/ansible
ardana >
ansible-playbook -i hosts/verb_hosts clients-upgrade.yml
Check that the software has been installed correctly. Using the same test that was unsuccessful before, connect to your stand-alone deployer and try to use the OpenStack CLI:
ardana >
source ~/keystone.osrc
ardana >
openstack project list
You should now see a list of projects returned:
ardana >
openstack project list
+----------------------------------+------------------+
| ID | Name |
+----------------------------------+------------------+
| 076b6e879f324183bbd28b46a7ee7826 | kronos |
| 0b81c3a9e59c47cab0e208ea1bb7f827 | backup |
| 143891c2a6094e2988358afc99043643 | octavia |
| 1d3972a674434f3c95a1d5ed19e0008f | glance-swift |
| 2e372dc57cac4915bf06bbee059fc547 | glance-check |
| 383abda56aa2482b95fb9da0b9dd91f4 | monitor |
| 606dd3b1fa6146668d468713413fb9a6 | swift-monitor |
| 87db9d1b30044ea199f0293f63d84652 | admin |
| 9fbb7494956a483ca731748126f50919 | demo |
| a59d0c682474434a9ddc240ddfe71871 | services |
| a69398f0f66a41b2872bcf45d55311a7 | swift-dispersion |
| f5ec48d0328d400992c1c5fb44ec238f | cinderinternal |
+----------------------------------+------------------+
We recommend verifying the installation using the instructions in Chapter 27, Cloud Verification.
Other common post-installation administrative tasks are listed in Chapter 33, Other Common Post-Installation Tasks.