HPE Helion OpenStack 8 supports RHEL compute nodes, specifically RHEL 7.5. HPE does not ship a Red Hat ISO with HPE Helion OpenStack, so you will need to provide a copy of the standard RHEL 7.5 ISO, which can be downloaded from the Red Hat website.
There are two approaches for deploying RHEL compute nodes in HPE Helion OpenStack:
Using the Cloud Lifecycle Manager to automatically deploy RHEL Compute Nodes.
Provisioning RHEL nodes yourself, either manually or using a third-party tool, and then providing the relevant information to the Cloud Lifecycle Manager.
These two approaches can be used whether you are installing a cloud for the first time or adding a compute node to an existing cloud.
Red Hat Enterprise Linux (RHEL) as the host OS for KVM, and supported RHEL guests, have been tested and qualified by HPE to run on HPE Helion OpenStack. HPE is one of Red Hat's largest OEM partners, with follow-the-sun global support coverage.
One Number to Call: HPE customers who have purchased both HPE Helion OpenStack and RHEL subscriptions with support from HPE will have one number to call for troubleshooting, fault isolation, and support from HPE technical support specialists in HPE Helion OpenStack and Red Hat technologies. If the problem is isolated to the RHEL software itself, the issue will be replicated on a Red Hat certified platform and escalated to Red Hat for resolution.
A Dual Support Model: HPE will troubleshoot and fault isolate an issue at the HPE Helion OpenStack software level. If HPE Helion OpenStack software is excluded as the cause of the problem, then customers who did not purchase RHEL support from HPE will be directed to the vendor from whom they purchased RHEL for continued support.
This section outlines how to manually provision a RHEL node, so that it can be added to a new or existing cloud created with HPE Helion OpenStack.
Install RHEL 7.5 using the standard installation ISO.
Use the ip addr command to find out which network devices are on your system:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether f0:92:1c:05:89:70 brd ff:ff:ff:ff:ff:ff
    inet 10.13.111.178/26 brd 10.13.111.191 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::f292:1cff:fe05:8970/64 scope link
       valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether f0:92:1c:05:89:74 brd ff:ff:ff:ff:ff:ff
Identify the entry that matches the MAC address of your server and edit the corresponding configuration file in /etc/sysconfig/network-scripts:
vi /etc/sysconfig/network-scripts/ifcfg-eno1
Edit the IPADDR and NETMASK values to match your environment. Note that the IPADDR is used in the corresponding stanza in servers.yml. You may also need to set BOOTPROTO to none.
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eno1
UUID=36060f7a-12da-469b-a1da-ee730a3b1d7c
DEVICE=eno1
ONBOOT=yes
NETMASK=255.255.255.192
IPADDR=10.13.111.14
(Optional) Reboot your RHEL node and ensure that it can be accessed from the Cloud Lifecycle Manager.
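As a quick check (a sketch; 10.13.111.14 is the example IPADDR configured above, so substitute your node's address), verify from the Cloud Lifecycle Manager that the node responds:

# Run from the Cloud Lifecycle Manager; substitute your node's IPADDR
ping -c 3 10.13.111.14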
Create a user named ardana with a home directory and set its password:

useradd -m ardana
passwd ardana

Allowing the User ardana to Use sudo Without Password
There are a number of different ways to achieve this. Here is one possibility using the pre-existing wheel group.
Add the user ardana to the wheel group:
usermod -aG wheel ardana
Run the command visudo.
Uncomment the line specifying NOPASSWD: ALL for the wheel group:
## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL
## Same thing without a password
%wheel ALL=(ALL) NOPASSWD: ALL
To facilitate using SSH from the deployer and running a command via sudo, comment out the lines for requiretty and !visiblepw:
#
# Disable "ssh hostname sudo <cmd>", because it will show the password in clear.
# You have to run "ssh -t hostname sudo <cmd>".
#
#Defaults requiretty
#
# Refuse to run if unable to disable echo on the tty. This setting should also be
# changed in order to be able to use sudo without a tty. See requiretty above.
#
#Defaults !visiblepw
This section is only required if the RHEL node was set up manually. You need to set up a yum repository, either external or local, containing a RHEL distribution supported by HPE Helion OpenStack. This repository must mirror the entire product repository, including the ResilientStorage and HighAvailability add-ons. To create this repository, perform these steps on the compute node:
Mount the RHEL ISO and expand it:
mkdir /tmp/localrhel
mount -o loop rhel7.iso /mnt
cd /mnt
tar cvf - . | (cd /tmp/localrhel ; tar xvf -)
cd /
umount /mnt
Create a repository file named /etc/yum.repos.d/localrhel.repo with the following contents:
[localrhel]
name=localrhel
baseurl=file:///tmp/localrhel
enabled=1
gpgcheck=0

[localrhel-1]
name=localrhel-1
baseurl=file:///tmp/localrhel/addons/ResilientStorage
enabled=1
gpgcheck=0

[localrhel-2]
name=localrhel-2
baseurl=file:///tmp/localrhel/addons/HighAvailability
enabled=1
gpgcheck=0
Run:
yum clean all
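To confirm that the three repositories were picked up, you can list them (a quick sanity check, not part of the original procedure):

# localrhel, localrhel-1, and localrhel-2 should all appear in the output
yum repolist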
As documented in Section 12.4, “Provisioning Your Baremetal Nodes”, some extra packages are required. Ensure that openssh-server, python, and rsync are installed.
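A minimal sketch of installing these packages from the local repository created above:

# openssh-server, python, and rsync are required by the Cloud Lifecycle Manager
sudo yum install -y openssh-server python rsync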
After you have started your installation using the Cloud Lifecycle Manager, or if you are adding a RHEL node to an existing cloud, you need to copy the deployer public key to the RHEL node. One way of doing this is to copy ~/.ssh/authorized_keys from another node in the cloud to the same location on the RHEL node. If you are installing a new cloud, this file will be available on the nodes after running the bm-reimage.yml playbook.
Ensure that there is global read access to the file ~/.ssh/authorized_keys.
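One possible way to do both steps, sketched below; EXISTING_NODE_IP is a placeholder for a cloud node that already has the key:

# Copy the deployer public key from an existing node (EXISTING_NODE_IP is a placeholder)
scp ardana@EXISTING_NODE_IP:~/.ssh/authorized_keys /home/ardana/.ssh/authorized_keys
# Grant global read access to the file
chmod 644 /home/ardana/.ssh/authorized_keys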
Now test passwordless SSH from the deployer and check your ability to remotely execute sudo commands:
ssh ardana@IP_OF_RHEL_NODE "sudo tail -5 /var/log/messages"
This section outlines how to install a RHEL Compute Node as a member of a new or existing cloud created with HPE Helion OpenStack.
Ensure that your environment includes a Subscription Management Tool (SMT) server, as described in Chapter 4, Installing and Setting Up an SMT Server on the Cloud Lifecycle Manager server (Optional). Run the following steps on that server to configure an RPM repository for use by RHEL clients.
Enable SMT mirroring of the external CentOS 7.5 repository to distribute RHEL-compatible packages to nodes in the cloud.
tux > sudo smt-setup-custom-repos --name CentOS --description "CentOS 7.5" --productid 100682 \
--exturl https://download.opensuse.org/repositories/systemsmanagement:/Ardana:/8:/CentOS:/7.5/CentOS_7.5/
tux > sudo smt-repos -e CentOS
tux > sudo smt-sync
tux > sudo smt-mirror
CentOS RPM packages will now be stored on the SMT server in /srv/www/htdocs/repo/RPMMD/CentOS. If your deployer node also operates as your SMT server, publish this content to other nodes with the following command:
sudo ln -s /srv/www/htdocs/repo/RPMMD/CentOS /opt/ardana_packager/ardana/rhel7/yum/centos
Or, if your SMT server is hosted separately from your deployer, mirror this repository the same way you mirror your other repositories, ensuring that it ends up hosted on the deployer at /opt/ardana_packager/ardana/rhel7/yum/centos.
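A sketch of one possible way to mirror the repository, assuming SSH access from the deployer to the SMT server (SMT_SERVER is a placeholder hostname):

# Pull the mirrored CentOS packages from the SMT server onto the deployer
sudo rsync -av SMT_SERVER:/srv/www/htdocs/repo/RPMMD/CentOS/ /opt/ardana_packager/ardana/rhel7/yum/centos/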
Add this new repo as a yum package source to cloud nodes by populating /etc/yum.repos.d/ardana-centos.repo on each RHEL system with the following contents:
[ardana-centos]
name=Ardana CentOS Repository
baseurl=http://DEPLOYER-IP:79/ardana/rhel7/yum/centos
enabled=1
gpgcheck=0
Add the new repository to the deployer node as well (and accept its certificate), but with lower priority than the other repositories. This ensures that packages present in both repositories are installed from the SUSE Linux Enterprise Server repositories by preference.
sudo zypper ar -p 100 file:/opt/ardana_packager/ardana/rhel7/yum/centos centos
sudo zypper ref
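To verify the priority took effect, list the repositories with their priorities (a quick check, not part of the original procedure):

# In zypper, lower numbers win; centos should show priority 100, below the default 99
zypper lr -p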
OpenStack components are installed on Red Hat Enterprise Linux Compute Nodes via virtualenv (venv). To facilitate this, certain packages must be installed from the centos repository on the deployer node:
for i in nova neutron monasca_agent
do
  sudo zypper in -y venv-openstack-$i-rhel-x86_64
done
Once these packages are installed, they will populate a new package directory which must be prepared to serve as a RHEL yum repository:
sudo create_index --dir=/opt/ardana_packager/ardana-8/rhel_venv/x86_64
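As a sanity check (an assumption about the expected layout, not part of the original procedure), the directory should now contain yum repository metadata:

# create_index should have generated a repodata/ directory alongside the venv RPMs
ls /opt/ardana_packager/ardana-8/rhel_venv/x86_64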
RHEL Compute Nodes require the errata update RHBA-2018:2198 to be installed for proper DVR functionality. (If this is not done, attempts to create Floating IP resources for VMs on RHEL Compute Nodes will fail, as documented in the Red Hat Knowledge Base.) This update can be downloaded via an active RHEL license, and should be applied after the node's operating system is installed. If this results in a change to the kernel version (to the target of 3.10.0-862), a reboot will be required.
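A quick way to confirm the result after applying the update (the target kernel version is taken from the paragraph above):

# The running kernel should report 3.10.0-862 after the errata is applied
uname -r
# If the new kernel is installed but not yet running, reboot the node
sudo reboot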
To use the OpenStack Pike version of nova-compute and qemu-kvm, virtualization packages with versions of at least 2.10.0 must be installed on RHEL nodes.
The RPM files needed for these packages must be provided before executing site.yml. It is expected that a Red Hat Virtualization product subscription will be used to provide these RPM packages, ensuring that the latest version is installed and that continuous updates will be available. The required packages are:
qemu-img-rhev
qemu-kvm-common-rhev
qemu-kvm-rhev
qemu-kvm-tools-rhev
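A sketch of installing these packages on a RHEL node, assuming the Red Hat Virtualization repository has already been enabled through the subscription mentioned above:

# Requires an enabled Red Hat Virtualization (RHV) repository
sudo yum install -y qemu-img-rhev qemu-kvm-common-rhev qemu-kvm-rhev qemu-kvm-tools-rhev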
SELinux policy updates are needed for nova-compute to work properly. They can be enabled by changing a flag in the default configuration. This must be done on the Cloud Lifecycle Manager before nova-compute is installed on the RHEL nodes.
On the Cloud Lifecycle Manager node, edit the following file:
cd ~/openstack/ardana/ansible
vi roles/NOV-CMP-KVM/defaults/main.yml
Set the SELinux flag to true:
nova_rhel_compute_apply_selinux_policy_updates: true
Save and close the file.
Commit the change to Git:
git commit -a --allow-empty -m "Enable SELinux policy updates for compute nodes"
Publish the changes so that they are included in playbook runs:
ansible-playbook -i hosts/localhost ready-deployment.yml
Continue to deploy OpenStack services on RHEL nodes by providing the names of each RHEL node (as defined in hosts/verb_hosts) in the following command:
ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts site.yml -l "NODE1_NAME,NODE2_NAME"
If you used an encryption password when running the configuration processor, append the additional parameter --ask-vault-pass to the above command, as in the example below.
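For example (a sketch combining the command above with the vault option):

ardana > ansible-playbook -i hosts/verb_hosts site.yml -l "NODE1_NAME,NODE2_NAME" --ask-vault-pass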