31 Installing SLES Compute #
31.1 SLES Compute Node Installation Overview #
SUSE OpenStack Cloud 9 supports SLES compute nodes, specifically SUSE Linux Enterprise Server 12 SP4. SUSE does not ship a SLES ISO with SUSE OpenStack Cloud, so you will need to download a copy of the SLES ISO (SLE-12-SP4-Server-DVD-x86_64-GM-DVD1.iso) from SUSE. To do so, either log in to or create a SUSE account, then download the ISO from https://www.suse.com/products/server/download/.
There are two approaches for deploying SLES compute nodes in SUSE OpenStack Cloud:
Using the Cloud Lifecycle Manager to automatically deploy SLES Compute Nodes.
Provisioning SLES nodes yourself, either manually or using a third-party tool, and then providing the relevant information to the Cloud Lifecycle Manager.
These two approaches can be used whether you are installing a cloud for the first time or adding a compute node to an existing cloud. Regardless of your approach, be certain to register your SLES compute nodes in order to get product updates as they become available. For more information, see Chapter 1, Registering SLES.
31.2 SLES Support #
The SUSE Linux Enterprise Server (SLES) host OS with KVM, and supported SLES guests, have been tested and qualified by SUSE to run on SUSE OpenStack Cloud.
31.3 Using the Cloud Lifecycle Manager to Deploy SLES Compute Nodes #
Deploying SLES compute nodes with Cobbler on the Cloud Lifecycle Manager uses legacy BIOS.
31.3.1 Deploying legacy BIOS SLES Compute nodes #
The installation process for legacy BIOS SLES Compute nodes is similar to that described in Chapter 24, Installing Mid-scale and Entry-scale KVM with some additional requirements:
The standard SLES ISO (SLE-12-SP4-Server-DVD-x86_64-GM-DVD1.iso) must be accessible as ~/sles12sp4.iso. Rename the ISO or create a symbolic link:

mv SLE-12-SP4-Server-DVD-x86_64-GM-DVD1.iso ~/sles12sp4.iso

You must identify the node(s) on which you want to install SLES by adding the key/value pair distro-id: sles12sp4-x86_64 to the server details in servers.yml. If there are any network interface or disk layout differences in the new server compared to the servers already in the model, you may also need to update net_interfaces.yml, server_roles.yml, disk_compute.yml and control_plane.yml. For more information on configuration of the Input Model for SLES, see Section 10.1, “SLES Compute Nodes”.

Run the config-processor-run.yml playbook to check for errors in the updated model.

Run the ready-deployment.yml playbook to build the new scratch directory.

Record the management network IP address that is used for the new server. It will be used in the installation process.
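As an illustration, a server entry in servers.yml with the extra key might look like the following sketch. Every value except distro-id is a placeholder, not taken from the text; use the names, addresses, and roles from your own input model:

```yaml
servers:
  - id: compute-0001              # placeholder node name
    ip-addr: 192.168.10.41        # placeholder management network IP
    role: COMPUTE-ROLE            # placeholder, must match server_roles.yml
    server-group: RACK1           # placeholder
    mac-addr: f0:92:1c:05:89:70   # placeholder PXE MAC address
    ilo-ip: 192.168.9.41          # placeholder IPMI address
    ilo-user: admin
    ilo-password: password
    distro-id: sles12sp4-x86_64   # selects the SLES 12 SP4 distro in Cobbler
```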
31.3.2 Deploying UEFI SLES compute nodes #
Deploying UEFI nodes has been automated in the Cloud Lifecycle Manager, provided the following requirements are met:
All of your nodes using SLES must already be installed, either manually or via Cobbler.
Your input model should be configured for your SLES nodes, per the instructions at Section 10.1, “SLES Compute Nodes”.
You should have run the configuration processor and the ready-deployment.yml playbook.

Execute the following steps to re-image one or more nodes after you have run the ready-deployment.yml playbook.
Run the following playbook, ensuring that you specify only your UEFI SLES nodes using the nodelist. This playbook will reconfigure Cobbler for the nodes listed.

ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook prepare-sles-grub2.yml -e nodelist=node1[,node2,node3]

Re-image the node(s), ensuring that you only specify your UEFI SLES nodes using the nodelist.

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost bm-reimage.yml \
  -e nodelist=node1[,node2,node3]

Back up the grub.cfg-* files in /srv/tftpboot/ as they will be overwritten when running the cobbler-deploy playbook in the next step. You will need these files if you need to re-image the nodes in the future.

Run the cobbler-deploy.yml playbook, which will reset Cobbler back to the default values:

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost cobbler-deploy.yml
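The backup of the grub.cfg-* files can be scripted. A minimal sketch, assuming the /srv/tftpboot path from the text; the backup directory is an assumption you can change:

```shell
#!/bin/sh
# Copy the Cobbler-generated grub.cfg-* files aside before cobbler-deploy.yml
# overwrites them, so the UEFI SLES nodes can be re-imaged later.
backup_grub_cfgs() {
    src=${1:-/srv/tftpboot}          # where Cobbler writes grub.cfg-* (from the text)
    dst=${2:-/srv/tftpboot/backup}   # backup location (an assumption, pick your own)
    mkdir -p "$dst"
    for f in "$src"/grub.cfg-*; do
        [ -e "$f" ] || continue      # no matching files: the glob stays literal
        cp -p "$f" "$dst"/
    done
}
```

Run it as root before the cobbler-deploy.yml step, and restore the files from the backup directory if a node ever needs re-imaging.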
31.3.2.1 UEFI Secure Boot #
Secure Boot is a method used to restrict binary execution for booting the system. With this option enabled, system BIOS will only allow boot loaders with trusted cryptographic signatures to be executed, thus preventing malware from hiding embedded code in the boot chain. Each boot loader launched during the boot process is digitally signed and that signature is validated against a set of trusted certificates embedded in the UEFI BIOS. Secure Boot is completely implemented in the BIOS and does not require special hardware.
Thus Secure Boot is:
Intended to prevent boot-sector malware or kernel code injection.
Not hardware-based code signing.
Extension of the UEFI BIOS architecture.
Optional with the ability to enable or disable it through the BIOS.
In the Boot Options of the RBSU, UEFI Mode should be set to Enabled.
Secure Boot itself is enabled through the BIOS security menus; the exact menu path depends on your server's BIOS.
31.4 Provisioning SLES Yourself #
This section outlines the steps needed to manually provision a SLES node so that it can be added to a new or existing SUSE OpenStack Cloud 9 cloud.
31.4.1 Configure Cloud Lifecycle Manager to Enable SLES #
Take note of the IP address of the Cloud Lifecycle Manager node. It will be used below during Section 31.4.6, “Add zypper repository”.
Mount or copy the contents of SLE-12-SP4-Server-DVD-x86_64-GM-DVD1.iso to /srv/www/suse-12.4/x86_64/repos/ardana/sles12/zypper/OS/.

If you choose to mount the ISO, we recommend creating an /etc/fstab entry to ensure the ISO is mounted again after a reboot.
31.4.2 Install SUSE Linux Enterprise Server 12 SP4 #
Install SUSE Linux Enterprise Server 12 SP4 using the standard ISO (SLE-12-SP4-Server-DVD-x86_64-GM-DVD1.iso).
Boot the SUSE Linux Enterprise Server 12 SP4 ISO.

Agree to the license.

Edit the network settings and enter the management network IP address recorded earlier. It is not necessary to enter a hostname or domain name. For product registration to work correctly, you must provide a DNS server: enter the IP address of a name server and the domain search list. Additional System Probing will occur.

On the Registration page, you can skip registration if the database server does not have an external interface or if there is no SMT server on the MGMT LAN.

No Add On Products are needed.

For System Role, select Default System. Do not select KVM Virtualization Host.

Partitioning:

Select Expert Partitioner and rescan the devices to clear the Suggested Partitioning.

Delete all Volume Groups.

Under the root of the directory tree, delete /dev/sda.

Delete any other partitions on any other drives.

Add a partition on sda, called ardana, with a size of 250 MB.

Add an EFI boot partition. The partition should be formatted as FAT and mounted at /boot/efi.

Add a volume group on /dev/sda2 called ardana-vg.

Add an LV to ardana-vg called root, with a size of 50 GB. Format it as Ext4 and mount it at /.

Acknowledge the warning about having no swap partition.

Press Accept on the Suggested Partitioning page.

Pick your time zone and check Hardware Clock Set to UTC.

Create a user named ardana and a password for the system administrator. Do not check Automatic Login.

On the Installation Settings page:

Disable firewall

Enable SSH service

Set text as the default systemd target.

Press Install and Confirm Installation with the Install button. Installation will begin and the system will reboot automatically when installation is complete.
When the system has booted, log in as root, using the system administrator password set during installation.

Set up the ardana user and add ardana to the sudoers group.

root # useradd -s /bin/bash -d /var/lib/ardana -m ardana
root # passwd ardana

Enter and retype the password for user ardana.

root # echo "ardana ALL=(ALL) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/ardana

Add an ardana group (id 1000) and change the group owner to ardana.

root # groupadd --gid 1000 ardana
root # chown -R ardana:ardana /var/lib/ardana

Disconnect the installation ISO. List the repositories and remove the repository that was used for the installation.

root # zypper lr

Identify the Name of the repository to remove.

root # zypper rr REPOSITORY_NAME

Copy the SSH key from the Cloud Lifecycle Manager.

root # ssh-copy-id ardana@DEPLOYER_IP_ADDRESS

Log in to the SLES node via SSH.

Continue with the site.yml playbook to scale out the node.
31.4.3 Assign a static IP #
Use the ip addr command to find out what network devices are on your system:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether f0:92:1c:05:89:70 brd ff:ff:ff:ff:ff:ff
    inet 10.13.111.178/26 brd 10.13.111.191 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::f292:1cff:fe05:8970/64 scope link
       valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether f0:92:1c:05:89:74 brd ff:ff:ff:ff:ff:ff
Identify the device that matches the MAC address of your server and edit the corresponding config file in /etc/sysconfig/network (on SLES, ifcfg files live there, not in /etc/sysconfig/network-scripts):

vi /etc/sysconfig/network/ifcfg-eno1

Edit the IPADDR and NETMASK values to match your environment. Note that the IPADDR is used in the corresponding stanza in servers.yml. You may also need to set BOOTPROTO to none.

TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eno1
UUID=36060f7a-12da-469b-a1da-ee730a3b1d7c
DEVICE=eno1
ONBOOT=yes
NETMASK=255.255.255.192
IPADDR=10.13.111.14
[OPTIONAL] Reboot your SLES node and ensure that it can be accessed from the Cloud Lifecycle Manager.
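To cross-check the configured values against servers.yml without opening an editor, the key/value pairs can be read back from the ifcfg file. The ifcfg_value helper below is a hypothetical sketch, not part of any SLES tooling:

```shell
#!/bin/sh
# Print the value assigned to a key in an ifcfg-style file.
# $1 = path to the ifcfg file, $2 = key name (e.g. IPADDR, NETMASK).
# The last assignment wins, matching how the file is actually parsed.
ifcfg_value() {
    sed -n "s/^$2=//p" "$1" | tail -n 1
}
```

For example, `ifcfg_value /etc/sysconfig/network/ifcfg-eno1 IPADDR` should print the same address that appears in the node's stanza in servers.yml.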
31.4.4 Add ardana user and home directory #

root # useradd -m ardana
root # passwd ardana
31.4.5 Allow user ardana to sudo without password #
Setting up sudo on SLES is covered in the SLES Administration Guide at https://documentation.suse.com/sles/15-SP1/single-html/SLES-admin/#sec-sudo-conf.
The recommendation is to create user-specific sudo configuration files under /etc/sudoers.d. Creating an /etc/sudoers.d/ardana file with the following content allows the ardana user to run sudo commands without a password:
ardana ALL=(ALL) NOPASSWD:ALL
31.4.6 Add zypper repository #
Using the ISO-based repositories created above, add the zypper repositories.
Follow these steps, updating the value of deployer_ip as necessary:

deployer_ip=192.168.10.254
tux > sudo zypper addrepo --no-gpgcheck --refresh http://$deployer_ip:79/ardana/sles12/zypper/OS SLES-OS
tux > sudo zypper addrepo --no-gpgcheck --refresh http://$deployer_ip:79/ardana/sles12/zypper/SDK SLES-SDK
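Both repository URLs are derived from the same deployer address, so a typo in the address breaks both. A small sketch that builds the URLs in one place; the repo_url helper is hypothetical, while port 79 and the paths come from the commands above:

```shell
#!/bin/sh
# Build an Ardana repository URL from the deployer IP and the repo suffix.
# $1 = deployer IP address, $2 = repository suffix (OS or SDK).
repo_url() {
    printf 'http://%s:79/ardana/sles12/zypper/%s' "$1" "$2"
}
```

Used as `sudo zypper addrepo --no-gpgcheck --refresh "$(repo_url "$deployer_ip" OS)" SLES-OS`, it keeps the two addrepo commands consistent.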
To verify that the repositories have been added, run:
tux >
sudo zypper repos --detail
For more information about Zypper, see the SLES Administration Guide at https://documentation.suse.com/sles/15-SP1/single-html/SLES-admin/#sec-zypper.
If you intend to attach encrypted volumes to any of your SLES compute nodes, install the cryptographic libraries through cryptsetup on each node:
tux >
sudo zypper in cryptsetup
31.4.7 Add Required Packages #
As documented in Section 24.4, “Provisioning Your Baremetal Nodes”,
you need to add extra packages.
Ensure that openssh-server, python, and rsync are installed.
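A quick check can confirm all three packages are present before handing the node to the Cloud Lifecycle Manager. The check_required_packages helper below is a hypothetical sketch; it defaults to rpm -q, the standard SLES package query:

```shell
#!/bin/sh
# Report which of the packages required by the Cloud Lifecycle Manager are missing.
# $1 optionally overrides the query command (defaults to "rpm -q").
check_required_packages() {
    missing=""
    for pkg in openssh-server python rsync; do
        # the query command must exit 0 when the package is installed
        if ! ${1:-rpm -q} "$pkg" >/dev/null 2>&1; then
            missing="$missing $pkg"
        fi
    done
    [ -z "$missing" ] && echo "all required packages installed" || echo "missing:$missing"
}
```

Any package it reports missing can be installed with `sudo zypper in PACKAGE` from the repositories added above.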
31.4.8 Set up passwordless SSH access #
Once you have started your installation using the Cloud Lifecycle Manager, or if
you are adding a SLES node to an existing cloud, you need to copy the
Cloud Lifecycle Manager public key to the SLES node. One way of doing this is to
copy the /home/ardana/.ssh/authorized_keys file from another
node in the cloud to the same location on the SLES node. If you are
installing a new cloud, this file will be available on the nodes after
running the bm-reimage.yml
playbook.
Ensure that there is global read access to the file
/home/ardana/.ssh/authorized_keys
.
Now test passwordless SSH from the deployer and check your ability to remotely execute sudo commands:
ardana > ssh ardana@IP_OF_SLES_NODE "sudo tail -5 /var/log/messages"