
19 Installing SLES Compute

19.1 SLES Compute Node Installation Overview

HPE Helion OpenStack 8 supports SLES compute nodes, specifically SUSE Linux Enterprise Server 12 SP3. HPE does not ship a SLES ISO with HPE Helion OpenStack, so you will need to download copies of the SLES ISO (SLE-12-SP3-Server-DVD-x86_64-GM-DVD1.iso) and the SLES SDK ISO (SLE-12-SP3-SDK-DVD-x86_64-GM-DVD1.iso) from SUSE. To download them, log in to or create a SUSE account at https://www.suse.com/products/server/download/.

There are two approaches for deploying SLES compute nodes in HPE Helion OpenStack:

  • Using the Cloud Lifecycle Manager to automatically deploy SLES Compute Nodes.

  • Provisioning SLES nodes yourself, either manually or using a third-party tool, and then providing the relevant information to the Cloud Lifecycle Manager.

These two approaches can be used whether you are installing a cloud for the first time or adding a compute node to an existing cloud. Regardless of your approach, be sure to register your SLES compute nodes so that they receive product updates as they become available. For more information, see Book “Planning an Installation with Cloud Lifecycle Manager”, Chapter 1 “Registering SLES”.

19.2 SLES Support

SUSE Linux Enterprise Server (SLES) as a KVM host OS, as well as supported SLES guests, have been tested and qualified by HPE to run on HPE Helion OpenStack. HPE is one of the largest SUSE OEMs, with follow-the-sun global support coverage.

  • One Number to Call

    HPE customers who have purchased both HPE Helion OpenStack and SLES subscriptions with support from HPE have one number to call for troubleshooting, fault isolation, and support from HPE technical support specialists in both HPE Helion OpenStack and SUSE technologies. If the problem is isolated to the SLES software itself, the issue will be replicated on a SUSE-certified platform and escalated to SUSE for resolution.

  • A Dual Support Model

    HPE will troubleshoot and fault isolate an issue at the HPE Helion OpenStack software level. If HPE Helion OpenStack software is excluded as the cause of the problem, then customers who did not purchase SLES support from HPE will be directed to the vendor from whom they purchased SLES for continued support.

19.3 Using the Cloud Lifecycle Manager to Deploy SLES Compute Nodes

Deploying SLES compute nodes with Cobbler from the Cloud Lifecycle Manager uses legacy BIOS.

Note

UEFI and Secure Boot are not supported by this Cobbler-based deployment method. To deploy UEFI SLES nodes, see Section 19.3.2.

19.3.1 Deploying legacy BIOS SLES Compute nodes

The installation process for legacy BIOS SLES compute nodes is similar to that described in Chapter 12, Installing Mid-scale and Entry-scale KVM, with some additional requirements:

  • The standard SLES ISO (SLE-12-SP3-Server-DVD-x86_64-GM-DVD1.iso) must be accessible as ~/sles12sp3.iso. Either rename the ISO:

    mv SLE-12-SP3-Server-DVD-x86_64-GM-DVD1.iso ~/sles12sp3.iso

    or create a symbolic link to it:

    ln -s /path/to/SLE-12-SP3-Server-DVD-x86_64-GM-DVD1.iso ~/sles12sp3.iso
  • You must identify the node(s) on which you want to install SLES by adding the key/value pair distro-id: sles12sp3-x86_64 to the server details in servers.yml (see the example stanza after this list). If the new server differs from the servers already in the model in its network interfaces or disk layout, you may also need to update net_interfaces.yml, server_roles.yml, disk_compute.yml, and control_plane.yml. For more information on configuring the input model for SLES, see Book “Planning an Installation with Cloud Lifecycle Manager”, Chapter 10 “Modifying Example Configurations for Compute Nodes”, Section 10.1 “SLES Compute Nodes”.

  • Run the config-processor-run.yml playbook to check for errors in the updated model (example commands follow this list).

  • Run the ready-deployment.yml playbook to build the new scratch directory.

  • Record the management network IP address that is used for the new server. It will be used in the installation process.
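The following sketch shows a servers.yml stanza with the distro-id key added. All other values (id, ip-addr, role, nic-mapping, and so on) are illustrative placeholders and must match your own input model:

servers:
  - id: compute1
    ip-addr: 192.168.10.41
    role: COMPUTE-ROLE
    server-group: RACK1
    nic-mapping: MY-NIC-MAPPING
    mac-addr: 8c:dc:d4:b5:c9:e0
    ilo-ip: 192.168.9.41
    ilo-user: admin
    ilo-password: password
    distro-id: sles12sp3-x86_64

The configuration processor and scratch-directory playbooks mentioned above are run from the Cloud Lifecycle Manager as follows:

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml
ardana > ansible-playbook -i hosts/localhost ready-deployment.yml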

19.3.2 Deploying UEFI SLES compute nodes

Deploying UEFI nodes has been automated in the Cloud Lifecycle Manager and requires that the following conditions are met:

  • All of your nodes using SLES must already be installed, either manually or via Cobbler.

  • Your input model should be configured for your SLES nodes, per the instructions at Book “Planning an Installation with Cloud Lifecycle Manager”, Chapter 10 “Modifying Example Configurations for Compute Nodes”, Section 10.1 “SLES Compute Nodes”.

  • You should have run the configuration processor and the ready-deployment.yml playbook.

Execute the following steps to re-image one or more nodes after you have run the ready-deployment.yml playbook.

  1. Run the following playbook, ensuring that you specify only your UEFI SLES nodes using the nodelist. This playbook will reconfigure Cobbler for the nodes listed.

    ardana > cd ~/scratch/ansible/next/ardana/ansible
    ardana > ansible-playbook prepare-sles-grub2.yml -e nodelist=node1[,node2,node3]
  2. Re-image the node(s), ensuring that you only specify your UEFI SLES nodes using the nodelist.

    ardana > cd ~/openstack/ardana/ansible
    ardana > ansible-playbook -i hosts/localhost bm-reimage.yml \
    -e nodelist=node1[,node2,node3]
  3. Back up the grub.cfg-* files in /srv/tftpboot/, as they will be overwritten when you run the cobbler-deploy playbook in the next step (a sample backup command follows these steps). You will need these files if you have to re-image the nodes in the future.

  4. Run the cobbler-deploy.yml playbook, which will reset Cobbler back to the default values:

    ardana > cd ~/openstack/ardana/ansible
    ardana > ansible-playbook -i hosts/localhost cobbler-deploy.yml
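A minimal sketch of the backup described in step 3; the destination directory is an arbitrary choice for illustration:

ardana > mkdir -p ~/grub-cfg-backup
ardana > cp /srv/tftpboot/grub.cfg-* ~/grub-cfg-backup/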

19.3.2.1 UEFI Secure Boot

Secure Boot is a method used to restrict binary execution for booting the system. With this option enabled, system BIOS will only allow boot loaders with trusted cryptographic signatures to be executed, thus preventing malware from hiding embedded code in the boot chain. Each boot loader launched during the boot process is digitally signed and that signature is validated against a set of trusted certificates embedded in the UEFI BIOS. Secure Boot is completely implemented in the BIOS and does not require special hardware.

Thus Secure Boot is:

  • Intended to prevent boot-sector malware or kernel code injection.

  • Firmware-based enforcement of code signing.

  • Extension of the UEFI BIOS architecture.

  • Optional with the ability to enable or disable it through the BIOS.

In the Boot Options of the RBSU, Boot Mode needs to be set to UEFI Mode and UEFI Optimized Boot should be Enabled.

Secure Boot is enabled at System Configuration › BIOS/Platform Configuration (RBSU) › Server Security › Secure Boot Configuration › Secure Boot Enforcement.

19.4 Provisioning SLES Yourself

This section outlines the steps needed to manually provision a SLES node so that it can be added to a new or existing HPE Helion OpenStack 8 cloud.

19.4.1 Configure Cloud Lifecycle Manager to Enable SLES

  1. Take note of the IP address of the Cloud Lifecycle Manager node. It will be used below during Section 19.4.6, “Add zypper repository”.

  2. Mount or copy the contents of SLE-12-SP3-Server-DVD-x86_64-GM-DVD1.iso to /srv/www/suse-12.3/x86_64/repos/ardana/sles12/zypper/OS/ and the contents of SLE-12-SP3-SDK-DVD-x86_64-GM-DVD1.iso to /srv/www/suse-12.3/x86_64/repos/ardana/sles12/zypper/SDK/ (both repositories are added to the node in Section 19.4.6, “Add zypper repository”).

Note

If you choose to mount an ISO, we recommend creating an /etc/fstab entry to ensure the ISO is mounted after a reboot.
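As a minimal sketch, assuming the ISO was saved as /home/ardana/sles12sp3.iso (adjust paths to your environment), mount it:

tux > sudo mkdir -p /srv/www/suse-12.3/x86_64/repos/ardana/sles12/zypper/OS
tux > sudo mount -o loop /home/ardana/sles12sp3.iso /srv/www/suse-12.3/x86_64/repos/ardana/sles12/zypper/OS

and add a matching /etc/fstab line so the mount survives a reboot:

/home/ardana/sles12sp3.iso /srv/www/suse-12.3/x86_64/repos/ardana/sles12/zypper/OS iso9660 loop,ro 0 0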

19.4.2 Install SUSE Linux Enterprise Server 12 SP3

Install SUSE Linux Enterprise Server 12 SP3 using the standard ISO (SLE-12-SP3-Server-DVD-x86_64-GM-DVD1.iso).

  1. Boot the SUSE Linux Enterprise Server 12 SP3 ISO.

  2. Agree to the license.

  3. Edit the network settings and enter the management network IP address recorded earlier. It is not necessary to enter a Hostname. For product registration to work correctly, you must provide a DNS server: enter the Name Server IP address and the Default IPv4 Gateway.

  4. Additional System Probing will occur.

  5. On the Registration page, you can skip registration if the database server does not have an external interface or if there is no SMT server on the MGMT LAN.

  6. No Add On Products are needed.

  7. For System Role, select Default System. Do not select KVM Virtualization Host.

  8. Partitioning

    1. Select Expert Partitioner and Rescan Devices to clear Proposed Settings.

    2. Delete all Volume Groups.

    3. Under the root of the directory tree, delete /dev/sda.

    4. Delete any other partitions on any other drives.

    5. Add Partition under sda, called ardana, with a Custom Size of 250MB.

    6. Add an EFI Boot Partition. Partition should be formatted as FAT and mounted at /boot/efi.

    7. Add Partition with all the remaining space (Maximum Size). The role for this partition is Raw Volume (unformatted). It should not be mounted. It should not be formatted.

    8. Select Volume Management and add a volume group to /dev/sda2 called ardana-vg.

    9. Add an LV to ardana-vg called root, Type of Normal Volume, Custom Size of 50GB. Format it as Ext4 File System and mount it at /.

    10. Acknowledge the warning about having no swap partition.

    11. Press Next on the Suggested Partitioning page.

  9. Pick your Time Zone and check Hardware Clock Set to UTC.

  10. Create a user named ardana and set a password for the system administrator. Do not check Automatic Login.

  11. On the Installation Settings page:

    • Disable firewall

    • Enable SSH service

    • Set text as the Default systemd target.

  12. Press Install and Confirm Installation with the Install button.

  13. Installation will begin and the system will reboot automatically when installation is complete.

  14. When the system has booted, log in as root using the system administrator password set during installation.

  15. Set up the ardana user and grant ardana passwordless sudo.

    root # useradd -s /bin/bash -d /var/lib/ardana -m ardana
    root # passwd ardana

    Enter and retype the password for user ardana.

    root # echo "ardana ALL=(ALL) NOPASSWD:ALL" | sudo tee -a \
        /etc/sudoers.d/ardana
  16. Add an ardana group (gid 1000) and change the ownership of the ardana home directory to ardana.

    root # groupadd --gid 1000 ardana
    root # chown -R ardana:ardana /var/lib/ardana
  17. Disconnect the installation ISO. List repositories and remove the repository that was used for the installation.

    root # zypper lr

    Identify the Name of the repository to remove.

    root # zypper rr REPOSITORY_NAME
  18. Copy the SSH key from the Cloud Lifecycle Manager.

    root # ssh-copy-id ardana@DEPLOYER_IP_ADDRESS
  19. Log in to the SLES node via SSH.

  20. Continue with the site.yml playbook to scale out the node (a typical invocation is sketched below).
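As a sketch, a scale-out run follows the same pattern as the other playbooks in this guide; hosts/verb_hosts is the inventory produced by ready-deployment.yml, and NEW_COMPUTE_NODE is a placeholder for the name of your node:

ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts site.yml --limit NEW_COMPUTE_NODE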

19.4.3 Assign a static IP

  1. Use the ip addr command to find out what network devices are on your system:

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
        link/ether f0:92:1c:05:89:70 brd ff:ff:ff:ff:ff:ff
        inet 10.13.111.178/26 brd 10.13.111.191 scope global eno1
           valid_lft forever preferred_lft forever
        inet6 fe80::f292:1cff:fe05:8970/64 scope link
           valid_lft forever preferred_lft forever
    3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
        link/ether f0:92:1c:05:89:74 brd ff:ff:ff:ff:ff:ff
  2. Identify the device that matches the MAC address of your server and edit the corresponding configuration file in /etc/sysconfig/network (on SLES, interface configurations live here, not in /etc/sysconfig/network-scripts as on RHEL):

    vi /etc/sysconfig/network/ifcfg-eno1
  3. Edit the IPADDR and NETMASK values to match your environment. Note that the IPADDR is used in the corresponding stanza in servers.yml. You may also need to set BOOTPROTO to static and STARTMODE to auto.

    BOOTPROTO='static'
    STARTMODE='auto'
    NAME='eno1'
    IPADDR='10.13.111.14'
    NETMASK='255.255.255.192'
  4. [OPTIONAL] Reboot your SLES node and ensure that it can be accessed from the Cloud Lifecycle Manager.
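If you would rather apply the configuration without a reboot, wicked (the default network framework on SLES 12) can reload the interface directly; this assumes the interface is named eno1 as in the example above:

tux > sudo wicked ifreload eno1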

19.4.4 Add ardana user and home directory

root # useradd -m ardana
root # passwd ardana

19.4.5 Allow user ardana to sudo without password

Setting up sudo on SLES is covered in the SLES Administration Guide at https://documentation.suse.com/sles/12-SP5/single-html/SLES-admin/#sec-sudo-conf.

The recommendation is to create user-specific sudo configuration files under /etc/sudoers.d. Creating an /etc/sudoers.d/ardana file with the following content allows ardana to run sudo commands without being prompted for a password.

ardana ALL=(ALL) NOPASSWD:ALL
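A minimal sketch of creating the file and validating the result (visudo -c checks the syntax of the sudoers files, including those under /etc/sudoers.d):

root # echo 'ardana ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/ardana
root # visudo -c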

19.4.6 Add zypper repository

Using the ISO-based repositories created above, add the zypper repositories. Update the value of deployer_ip as necessary:

tux > deployer_ip=192.168.10.254
tux > sudo zypper addrepo --no-gpgcheck --refresh http://$deployer_ip:79/ardana/sles12/zypper/OS SLES-OS
tux > sudo zypper addrepo --no-gpgcheck --refresh http://$deployer_ip:79/ardana/sles12/zypper/SDK SLES-SDK

To verify that the repositories have been added, run:

tux > sudo zypper repos --detail

For more information about Zypper, see the SLES Administration Guide at https://documentation.suse.com/sles/12-SP5/single-html/SLES-admin/#sec-zypper.

Warning

If you intend to attach encrypted volumes to any of your SLES compute nodes, install the cryptographic libraries through cryptsetup on each node. Run the following command to install them:

tux > sudo zypper in cryptsetup

19.4.7 Add Required Packages

As documented in Section 12.4, “Provisioning Your Baremetal Nodes”, you need to add extra packages. Ensure that openssh-server, python, and rsync are installed.
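A sketch of installing them with zypper. The package names are assumptions based on SLES 12 defaults (on SLES, the SSH server ships in the openssh package); verify them against your channels:

tux > sudo zypper in openssh python rsync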

19.4.8 Set up passwordless SSH access

Once you have started your installation using the Cloud Lifecycle Manager, or if you are adding a SLES node to an existing cloud, you need to copy the Cloud Lifecycle Manager public key to the SLES node. One way of doing this is to copy the /home/ardana/.ssh/authorized_keys from another node in the cloud to the same location on the SLES node. If you are installing a new cloud, this file will be available on the nodes after running the bm-reimage.yml playbook.
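Alternatively, as a sketch, you can push the deployer's key from the Cloud Lifecycle Manager with ssh-copy-id; you will be prompted for the ardana password set during installation:

ardana > ssh-copy-id ardana@IP_OF_SLES_NODE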

Important

Ensure that there is global read access to the file /home/ardana/.ssh/authorized_keys.

Now test passwordless SSH from the deployer and check your ability to remotely execute sudo commands:

ardana > ssh ardana@IP_OF_SLES_NODE "sudo tail -5 /var/log/messages"