HPE Helion OpenStack 8 supports SLES compute nodes, specifically SUSE Linux Enterprise Server 12 SP3. HPE does not ship a SLES ISO with HPE Helion OpenStack, so you will need to download the SLES ISO (SLE-12-SP3-Server-DVD-x86_64-GM-DVD1.iso) and the SLES SDK ISO (SLE-12-SP3-SDK-DVD-x86_64-GM-DVD1.iso) from SUSE. You can download both from https://www.suse.com/products/server/download/. To do so, either log in or create a SUSE account before downloading.
There are two approaches for deploying SLES compute nodes in HPE Helion OpenStack:
Using the Cloud Lifecycle Manager to automatically deploy SLES Compute Nodes.
Provisioning SLES nodes yourself, either manually or using a third-party tool, and then providing the relevant information to the Cloud Lifecycle Manager.
These two approaches can be used whether you are installing a cloud for the first time or adding a compute node to an existing cloud. Regardless of your approach, be certain to register your SLES compute nodes in order to get product updates as they become available. For more information, see Book “Planning an Installation with Cloud Lifecycle Manager”, Chapter 1 “Registering SLES”.
SUSE Linux Enterprise Server (SLES) Host OS KVM and/or supported SLES guests have been tested and qualified by HPE to run on HPE Helion OpenStack. HPE is one of the largest SUSE OEMs, with follow-the-sun global support coverage.
One Number to Call
HPE customers who have purchased both HPE Helion OpenStack and SLES subscriptions with support from HPE will have one number to call for troubleshooting, fault isolation and support from HPE technical support specialists in HPE Helion OpenStack and SUSE technologies. If the problem is isolated to SLES software itself the issue will be replicated on a SUSE certified platform and escalated to SUSE for resolution.
A Dual Support Model
HPE will troubleshoot and fault isolate an issue at the HPE Helion OpenStack software level. If HPE Helion OpenStack software is excluded as the cause of the problem, then customers who did not purchase SLES support from HPE will be directed to the vendor from whom they purchased SLES for continued support.
The method used for deploying SLES compute nodes using Cobbler on the Cloud Lifecycle Manager uses legacy BIOS. UEFI and Secure Boot are not supported on SLES compute nodes.
The installation process for legacy BIOS SLES compute nodes is similar to that described in Chapter 12, Installing Mid-scale and Entry-scale KVM, with some additional requirements:
The standard SLES ISO (SLE-12-SP3-Server-DVD-x86_64-GM-DVD1.iso) must be accessible as ~/sles12sp3.iso. Rename the ISO or create a symbolic link:
mv SLE-12-SP3-Server-DVD-x86_64-GM-DVD1.iso ~/sles12sp3.iso
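If you prefer to keep the original file name, a symbolic link works equally well. A minimal sketch, assuming the ISO was downloaded to the current directory (adjust the path to wherever your ISO actually lives):

```shell
# Sketch: expose the SLES ISO under the name the Cloud Lifecycle Manager expects,
# without renaming the original file. The source path is an assumption.
ln -sf "$PWD/SLE-12-SP3-Server-DVD-x86_64-GM-DVD1.iso" ~/sles12sp3.iso
ls -l ~/sles12sp3.iso
```

A symlink also makes it easy to point ~/sles12sp3.iso at a different service pack later without copying data.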
You must identify the node(s) on which you want to install SLES by adding the key/value pair distro-id: sles12sp3-x86_64 to the server details in servers.yml. If there are any network interface or disk layout differences in the new server compared to the servers already in the model, you may also need to update net_interfaces.yml, server_roles.yml, disk_compute.yml, and control_plane.yml. For more information on configuring the Input Model for SLES, see Book “Planning an Installation with Cloud Lifecycle Manager”, Chapter 10 “Modifying Example Configurations for Compute Nodes”, Section 10.1 “SLES Compute Nodes”.
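For illustration, a servers.yml entry for a SLES node might look like the following sketch. Only the distro-id key is taken from this section; every other value is a hypothetical placeholder, and the exact set of fields must match your existing input model:

```yaml
# Hypothetical servers.yml stanza; replace all placeholder values.
- id: compute-0001              # placeholder server name
  ip-addr: 192.168.10.41        # management network IP (placeholder)
  role: COMPUTE-ROLE            # must match a role defined in server_roles.yml
  server-group: RACK1           # placeholder
  mac-addr: f0:92:1c:05:89:70   # placeholder
  distro-id: sles12sp3-x86_64   # selects SLES 12 SP3 for this node
```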
Run the config-processor-run.yml playbook to check for errors in the updated model.
Run the ready-deployment.yml playbook to build the new scratch directory.
Record the management network IP address that is used for the new server. It will be used in the installation process.
Deploying UEFI nodes has been automated in the Cloud Lifecycle Manager and requires the following to be met:
All of your nodes using SLES must already be installed, either manually or via Cobbler.
Your input model should be configured for your SLES nodes, per the instructions at Book “Planning an Installation with Cloud Lifecycle Manager”, Chapter 10 “Modifying Example Configurations for Compute Nodes”, Section 10.1 “SLES Compute Nodes”.
You should have run the configuration processor and the ready-deployment.yml playbook.
Execute the following steps to re-image one or more nodes after you have run the ready-deployment.yml playbook.
Run the following playbook, ensuring that you specify only your UEFI SLES nodes using the nodelist. This playbook will reconfigure Cobbler for the nodes listed.
ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook prepare-sles-grub2.yml -e nodelist=node1[,node2,node3]
Re-image the node(s), ensuring that you only specify your UEFI SLES nodes using the nodelist.
ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost bm-reimage.yml \
  -e nodelist=node1[,node2,node3]
Back up the grub.cfg-* files in /srv/tftpboot/, as they will be overwritten when running the cobbler-deploy playbook in the next step. You will need these files if you need to re-image the nodes in the future.
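The backup step above can be sketched as follows. A demo directory stands in for /srv/tftpboot so the snippet is self-contained; on a real Cloud Lifecycle Manager, copy from /srv/tftpboot/ directly:

```shell
# Sketch: preserve Cobbler-generated GRUB configs before cobbler-deploy overwrites them.
src=/tmp/demo-tftpboot                # on a real deployer: src=/srv/tftpboot
mkdir -p "$src"
touch "$src/grub.cfg-node1" "$src/grub.cfg-node2"   # demo files only
backup=/tmp/grub-cfg-backup           # any safe location outside /srv/tftpboot
mkdir -p "$backup"
cp "$src"/grub.cfg-* "$backup"/
ls "$backup"
```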
Run the cobbler-deploy.yml playbook, which will reset Cobbler back to the default values:
ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost cobbler-deploy.yml
Secure Boot is a method used to restrict binary execution for booting the system. With this option enabled, system BIOS will only allow boot loaders with trusted cryptographic signatures to be executed, thus preventing malware from hiding embedded code in the boot chain. Each boot loader launched during the boot process is digitally signed and that signature is validated against a set of trusted certificates embedded in the UEFI BIOS. Secure Boot is completely implemented in the BIOS and does not require special hardware.
Thus Secure Boot is:
Intended to prevent boot-sector malware or kernel code injection.
Hardware-based code signing.
Extension of the UEFI BIOS architecture.
Optional with the ability to enable or disable it through the BIOS.
In the Boot Options of RBSU, UEFI Mode should be set to Enabled. Secure Boot is then enabled through the RBSU menus.
This section outlines the steps needed to manually provision a SLES node so that it can be added to a new or existing HPE Helion OpenStack 8 cloud.
Take note of the IP address of the Cloud Lifecycle Manager node. It will be used below during Section 19.4.6, “Add zypper repository”.
Mount or copy the contents of SLE-12-SP3-Server-DVD-x86_64-GM-DVD1.iso to /srv/www/suse-12.3/x86_64/repos/ardana/sles12/zypper/OS/. If you choose to mount the ISO, we recommend creating an /etc/fstab entry to ensure the ISO is mounted after a reboot.
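Such an /etc/fstab entry might look like the following sketch. The ISO location /srv/isos/ is a hypothetical path; adjust both paths to match your system:

```
# Hypothetical fstab entry: loop-mount the SLES ISO at the repository path.
/srv/isos/SLE-12-SP3-Server-DVD-x86_64-GM-DVD1.iso  /srv/www/suse-12.3/x86_64/repos/ardana/sles12/zypper/OS  iso9660  loop,ro  0  0
```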
Install SUSE Linux Enterprise Server 12 SP3 using the standard ISO (SLE-12-SP3-Server-DVD-x86_64-GM-DVD1.iso):
Boot the SUSE Linux Enterprise Server 12 SP3 ISO.
Agree to the license.
Edit the network settings and enter the management network IP address recorded earlier. It is not necessary to enter a hostname or domain name. For product registration to work correctly, you must provide a DNS server; enter the name server IP address. Additional System Probing will occur.
On the Registration page, you can skip registration if the database server does not have an external interface or if there is no SMT server on the MGMT LAN.
No Add On Products are needed.
For System Role
, select . Do not select .
Partitioning
Select
and to clear .
Delete all Volume Groups.
Under the root of the directory tree, delete /dev/sda.
Delete any other partitions on any other drives.
Add a partition to sda, called ardana, with a size of 250MB.
Add an FAT
and mounted at
.
Add a second partition using all the remaining space. It should not be mounted, and it should not be formatted.
Select /dev/sda2 and create a volume group called ardana-vg.
Add an LV to ardana-vg called root, with a size of 50GB. Format it and mount it at /.
Acknowledge the warning about having no swap partition.
Accept the changes to return to the Suggested Partitioning page.
Pick your time zone and set the Hardware Clock to UTC.
Create a user named ardana and a password for the system administrator. Do not check .
On the Installation Settings page:
Disable the firewall.
Enable the SSH service.
Set text as the default target.
Confirm the installation with the Install button.
Installation will begin and the system will reboot automatically when installation is complete.
When the system is booted, log in as root using the system administrator password set during installation.
Set up the ardana user and add ardana to the sudoers group.
root # useradd -s /bin/bash -d /var/lib/ardana -m ardana
root # passwd ardana
Enter and retype the password for user ardana.
root # echo "ardana ALL=(ALL) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/ardana
Add an ardana group (id 1000) and change the group ownership to ardana.
root # groupadd --gid 1000 ardana
root # chown -R ardana:ardana /var/lib/ardana
Disconnect the installation ISO. List repositories and remove the repository that was used for the installation.
root # zypper lr
Identify the Name of the repository to remove.
root # zypper rr REPOSITORY_NAME
Copy the SSH key from the Cloud Lifecycle Manager.
root # ssh-copy-id ardana@DEPLOYER_IP_ADDRESS
Log in to the SLES node via SSH.
Continue with the site.yml playbook to scale out the node.
Use the ip addr command to find out which network devices are on your system:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether f0:92:1c:05:89:70 brd ff:ff:ff:ff:ff:ff
    inet 10.13.111.178/26 brd 10.13.111.191 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::f292:1cff:fe05:8970/64 scope link
       valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether f0:92:1c:05:89:74 brd ff:ff:ff:ff:ff:ff
Identify the one that matches the MAC address of your server and edit the corresponding config file in /etc/sysconfig/network-scripts:
vi /etc/sysconfig/network-scripts/ifcfg-eno1
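Matching the MAC address to a device name can also be scripted. A minimal sketch, using a stand-in for `ip -o link` output (the MAC and interface names are taken from the example above; on a real node pipe `ip -o link` in directly):

```shell
# Sketch: find which interface carries a known MAC address.
mac="f0:92:1c:05:89:70"                     # MAC from the server's records
sample='2: eno1: link/ether f0:92:1c:05:89:70
3: eno2: link/ether f0:92:1c:05:89:74'      # stand-in for: ip -o link
iface=$(printf '%s\n' "$sample" | awk -v m="$mac" -F': ' '$0 ~ m {print $2}')
echo "$iface"
```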
Edit the IPADDR and NETMASK values to match your environment. Note that the IPADDR is used in the corresponding stanza in servers.yml. You may also need to set BOOTPROTO to none.
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eno1
UUID=36060f7a-12da-469b-a1da-ee730a3b1d7c
DEVICE=eno1
ONBOOT=yes
NETMASK=255.255.255.192
IPADDR=10.13.111.14
[OPTIONAL] Reboot your SLES node and ensure that it can be accessed from the Cloud Lifecycle Manager.
Create the ardana user and home directory:
useradd -m ardana
passwd ardana
Allow ardana to sudo without a password. Setting up sudo on SLES is covered in the SLES Administration Guide at https://documentation.suse.com/sles/12-SP5/single-html/SLES-admin/#sec-sudo-conf.
The recommendation is to create user-specific sudo config files under /etc/sudoers.d; creating an /etc/sudoers.d/ardana config file with the following content will allow sudo commands without the requirement of a password:
ardana ALL=(ALL) NOPASSWD:ALL
Using the ISO-based repositories created above, add the zypper repositories. Follow these steps, updating the value of deployer_ip as necessary.
tux > deployer_ip=192.168.10.254
tux > sudo zypper addrepo --no-gpgcheck --refresh http://$deployer_ip:79/ardana/sles12/zypper/OS SLES-OS
tux > sudo zypper addrepo --no-gpgcheck --refresh http://$deployer_ip:79/ardana/sles12/zypper/SDK SLES-SDK
To verify that the repositories have been added, run:
tux > sudo zypper repos --detail
For more information about Zypper, see the SLES Administration Guide at https://documentation.suse.com/sles/12-SP5/single-html/SLES-admin/#sec-zypper.
If you intend to attach encrypted volumes to any of your SLES compute nodes, install the cryptographic libraries through cryptsetup on each node. Run the following command to install the necessary cryptographic libraries:
tux > sudo zypper in cryptsetup
As documented in Section 12.4, “Provisioning Your Baremetal Nodes”, you need to add extra packages. Ensure that openssh-server, python, and rsync are installed.
Once you have started your installation using the Cloud Lifecycle Manager, or if you are adding a SLES node to an existing cloud, you need to copy the Cloud Lifecycle Manager public key to the SLES node. One way of doing this is to copy /home/ardana/.ssh/authorized_keys from another node in the cloud to the same location on the SLES node. If you are installing a new cloud, this file will be available on the nodes after running the bm-reimage.yml playbook.
Ensure that there is global read access to the file /home/ardana/.ssh/authorized_keys.
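Setting that permission can be sketched as follows. A demo file is used here so the snippet is self-contained; on the SLES node the target is /home/ardana/.ssh/authorized_keys:

```shell
# Sketch: grant global read access, as the deployment requires.
f=/tmp/authorized_keys.demo     # stand-in for /home/ardana/.ssh/authorized_keys
touch "$f"
chmod 644 "$f"                  # owner read/write, group and other read-only
stat -c %a "$f"
```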
Now test passwordless SSH from the deployer and check your ability to remotely execute sudo commands:
ardana > ssh ardana@IP_OF_SLES_NODE "sudo tail -5 /var/log/messages"