3 Provisioning RHEL #
This section outlines how to manually provision a RHEL node, so that it can be added to a new or existing cloud created with SUSE OpenStack Cloud.
3.1 Installing RHEL 7.5 #
Install RHEL 7.5 using the standard installation ISO.
3.2 Assigning a Static IP #
Use the `ip addr` command to find out what network devices are on your system:

```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether f0:92:1c:05:89:70 brd ff:ff:ff:ff:ff:ff
    inet 10.13.111.178/26 brd 10.13.111.191 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::f292:1cff:fe05:8970/64 scope link
       valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether f0:92:1c:05:89:74 brd ff:ff:ff:ff:ff:ff
```
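On a machine with many interfaces, the MAC lookup can be scripted. The following helper is only a sketch (the name `find_iface` is our own, not something the product ships); it parses the one-line-per-device output of `ip -o link`:

```shell
# Sketch: print the interface name whose line contains the given MAC address.
# `ip -o link` emits one line per device, e.g.:
#   2: eno1: <BROADCAST,...> mtu 1500 ... link/ether f0:92:1c:05:89:70 brd ...
find_iface() {
  # usage: ip -o link | find_iface <mac>
  awk -v mac="$1" -F': ' 'index(tolower($0), tolower(mac)) {print $2; exit}'
}
```

For the server shown above, `ip -o link | find_iface f0:92:1c:05:89:70` would print `eno1`.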
Identify the entry that matches the MAC address of your server and edit the corresponding configuration file in `/etc/sysconfig/network-scripts`:

```
vi /etc/sysconfig/network-scripts/ifcfg-eno1
```
Edit the `IPADDR` and `NETMASK` values to match your environment. Note that the `IPADDR` is used in the corresponding stanza in `servers.yml`. You may also need to set `BOOTPROTO` to `none`.

```
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eno1
UUID=36060f7a-12da-469b-a1da-ee730a3b1d7c
DEVICE=eno1
ONBOOT=yes
NETMASK=255.255.255.192
IPADDR=10.13.111.14
```
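Before rebooting, a quick sanity check of the edited file can catch typos. This is a sketch under our own conventions; the function name and the exact set of keys it checks are ours, not part of the product:

```shell
# Sketch: verify the keys this section says to edit are present in an
# ifcfg file. Checks BOOTPROTO/ONBOOT values and that IPADDR/NETMASK are set.
check_ifcfg() {
  for key in BOOTPROTO=none ONBOOT=yes IPADDR NETMASK; do
    grep -q "^${key}" "$1" || { echo "check ${key} in $1"; return 1; }
  done
  echo "ifcfg looks complete"
}
```

On the node, run it as `check_ifcfg /etc/sysconfig/network-scripts/ifcfg-eno1`.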
(Optional) Reboot your RHEL node and ensure that it can be accessed from the Cloud Lifecycle Manager.
3.3 Adding the User and Home Directory for ardana #

```
tux > sudo useradd -m ardana
tux > sudo passwd ardana
```
3.4 Allow User ardana to Use sudo Without Password #
There are a number of different ways to achieve this. Here is one possibility using the pre-existing `wheel` group.

1. Add the user `ardana` to the `wheel` group.

   ```
   tux > sudo usermod -aG wheel ardana
   ```

2. Run the command `visudo`.

3. Uncomment the line specifying `NOPASSWD: ALL` for the `wheel` group.

   ```
   ## Allows people in group wheel to run all commands
   %wheel  ALL=(ALL)       ALL

   ## Same thing without a password
   %wheel  ALL=(ALL)       NOPASSWD: ALL
   ```
To facilitate using SSH from the deployer and running a command via `sudo`, comment out the lines for `requiretty` and `!visiblepw`:

```
#
# Disable "ssh hostname sudo <cmd>", because it will show the password in clear.
# You have to run "ssh -t hostname sudo <cmd>".
#
#Defaults    requiretty

#
# Refuse to run if unable to disable echo on the tty. This setting should also be
# changed in order to be able to use sudo without a tty. See requiretty above.
#
#Defaults   !visiblepw
```
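After saving, it is worth confirming that the `NOPASSWD` rule really is uncommented. The helper below is only a sketch (the name is ours), and its pattern assumes the stock rule formatting shown above:

```shell
# Sketch: succeed only if an uncommented wheel NOPASSWD rule is present
# in the given sudoers file.
nopasswd_enabled() {
  grep -Eq '^[[:space:]]*%wheel[[:space:]]+ALL=\(ALL\)[[:space:]]+NOPASSWD: ALL' "$1"
}
```

Reading `/etc/sudoers` requires root, so run the check via sudo, for example `sudo sh -c '…' ` wrapping `nopasswd_enabled /etc/sudoers && echo "rule active"`.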
3.5 Setting Up a Yum Repository from a RHEL ISO #
This section is only required if the RHEL node is set up manually. You need to
set up a Yum repository, either external or local, containing a RHEL
distribution supported by SUSE OpenStack Cloud. This repository must mirror the
entire product repository, including the `ResilientStorage`
and `HighAvailability` add-ons. To create this repository,
perform these steps on the compute node:
Mount the RHEL ISO and expand it:

```
tux > sudo mkdir /tmp/localrhel
tux > mkdir rhel7
tux > sudo mount -o loop rhel7.iso rhel7
tux > cd rhel7
tux > sudo tar cvf - . | (cd /tmp/localrhel; tar xvf -)
tux > cd ..
tux > sudo umount rhel7
tux > rm -r rhel7
```

Create a repository file named `/etc/yum.repos.d/localrhel.repo` with the following contents:

```
[localrhel]
name=localrhel
baseurl=file:///tmp/localrhel
enabled=1
gpgcheck=0

[localrhel-1]
name=localrhel-1
baseurl=file:///tmp/localrhel/addons/ResilientStorage
enabled=1
gpgcheck=0

[localrhel-2]
name=localrhel-2
baseurl=file:///tmp/localrhel/addons/HighAvailability
enabled=1
gpgcheck=0
```
Run:

```
tux > sudo yum clean all
```
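Before pointing yum at the new repository, you can verify that the expanded tree actually contains the add-on directories the `.repo` stanzas reference. A sketch, with a function name of our own choosing:

```shell
# Sketch: confirm the expanded ISO tree contains the base repository and
# both add-on trees referenced by the baseurl= lines above.
check_rhel_tree() {
  for sub in "" "addons/ResilientStorage" "addons/HighAvailability"; do
    if [ ! -d "$1/$sub" ]; then
      echo "missing: $1/$sub"
      return 1
    fi
  done
  echo "layout ok"
}
```

Run it as `check_rhel_tree /tmp/localrhel` after expanding the ISO.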
3.6 Adding Required Packages #
Extra packages are required. Ensure that `openssh-server`, `python`, `python-apt`, and `rsync` are installed.
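A small helper can report which of these packages are still missing. This is a sketch: the name `missing_pkgs` and its `CHECK_CMD` override are our own convention; on RHEL the real query is `rpm -q`:

```shell
# Sketch: print each required package that the query command does not find.
# CHECK_CMD defaults to `rpm -q` and exists only so the logic can be
# exercised without rpm.
missing_pkgs() {
  check="${CHECK_CMD:-rpm -q}"
  for p in openssh-server python python-apt rsync; do
    $check "$p" >/dev/null 2>&1 || echo "$p"
  done
}
```

`missing_pkgs | xargs -r sudo yum install -y` then installs whatever is absent (GNU xargs `-r` skips the call when the list is empty).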
3.7 Setting Up Passwordless SSH Access #
After you have started your installation using the Cloud Lifecycle Manager, or if you are
adding a RHEL node to an existing cloud, you need to copy the deployer
public key to the RHEL node. One way of doing this is to copy the
~/.ssh/authorized_keys
from another node in
the cloud to the same location on the RHEL node. If you are installing a
new cloud, this file will be available on the nodes after running the
bm-reimage.yml
playbook.
Ensure that there is global read access to the file
~/.ssh/authorized_keys
.
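A quick way to confirm the read bit is set (assumes GNU `stat`, as shipped on RHEL; the function name is ours):

```shell
# Sketch: report whether "others" can read a file, which is what the global
# read access requirement above amounts to.
world_readable() {
  mode=$(stat -c '%a' "$1") || return 1
  last=${mode#"${mode%?}"}   # final octal digit = permissions for "others"
  case "$last" in
    4|5|6|7) echo "world-readable" ;;
    *) echo "not world-readable"; return 1 ;;
  esac
}
```

For example, `world_readable ~/.ssh/authorized_keys || chmod a+r ~/.ssh/authorized_keys` fixes the permission if it is missing.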
Now test passwordless SSH from the deployer and check your ability to remotely execute `sudo` commands:

```
ssh ardana@IP_OF_RHEL_NODE "sudo tail -5 /var/log/messages"
```