For information about supported hardware for multipathing, see Book “Planning an Installation with Cloud Lifecycle Manager”, Chapter 2 “Hardware and Software Support Matrix”, Section 2.2 “Supported Hardware Configurations”.
When exporting a LUN to a node for boot from SAN, ensure that the LUN is presented as LUN 0, and configure any setup dialog that is necessary in the firmware to consume this LUN 0 for OS boot.
Any hosts that are connected to 3PAR storage must have a host persona of 2-generic-alua set on the 3PAR. Refer to the 3PAR documentation for the steps necessary to check this and change it if necessary.
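As an illustration only (the exact commands, host names, and volume names depend on your environment and should be verified against the 3PAR documentation), checking the persona and exporting a boot LUN from the 3PAR CLI looks roughly like this:

# Check the current persona of the host (host and volume names are examples)
showhost -persona controller2
# Set persona 2 (Generic-ALUA) if it is not already set
sethost -persona 2 controller2
# Export the boot volume to the host as LUN 0
createvlun controller2-boot-vv 0 controller2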
iSCSI boot from SAN is not supported. For more information on the use of Cinder with multipath, see Section 23.1.3, “Multipath Support”.
To allow HPE Helion OpenStack 8 to use volumes from a SAN, you have to specify configuration options for both the installation and the OS configuration phase. In all cases, the devices used are devices for which multipath is configured.
For FC-connected nodes, and for FCoE nodes where the network processor used is from the Emulex family (such as the 650FLB), the following changes need to be made.
In each stanza of servers.yml, insert a line stating boot-from-san: true. For example:
- id: controller2
  ip-addr: 192.168.10.4
  role: CONTROLLER-ROLE
  server-group: RACK2
  nic-mapping: HP-DL360-4PORT
  boot-from-san: true
This uses the disk /dev/mapper/mpatha as the default device on which to install the OS.
In the disk input models, specify the devices that will be used via their multipath names (which will be of the form /dev/mapper/mpatha, /dev/mapper/mpathb, etc.):
volume-groups:
  - name: ardana-vg
    physical-volumes:
      # NOTE: 'sda_root' is a templated value. This value is checked in
      # os-config and replaced by the partition actually used on sda,
      # for example sda1 or sda5
      - /dev/mapper/mpatha_root
  ...
  - name: vg-comp
    physical-volumes:
      - /dev/mapper/mpathb
Instead of using Cobbler, you need to provision a baremetal node manually using the following procedure.
Assign a static IP to the node.
Use the ip addr command to list the active network interfaces on your system:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether f0:92:1c:05:89:70 brd ff:ff:ff:ff:ff:ff
    inet 10.13.111.178/26 brd 10.13.111.191 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::f292:1cff:fe05:8970/64 scope link
       valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether f0:92:1c:05:89:74 brd ff:ff:ff:ff:ff:ff
Identify the network interface that matches the MAC address of your server and edit the corresponding configuration file in /etc/sysconfig/network-scripts. For example, for the eno1 interface, open the /etc/sysconfig/network-scripts/ifcfg-eno1 file and edit the IPADDR and NETMASK values to match your environment. Note that the IPADDR is used in the corresponding stanza in servers.yml. You may also need to set BOOTPROTO to none:
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eno1
UUID=36060f7a-12da-469b-a1da-ee730a3b1d7c
DEVICE=eno1
ONBOOT=yes
NETMASK=255.255.255.192
IPADDR=10.13.111.14
Reboot the SLES node and ensure that it can be accessed from the Cloud Lifecycle Manager.
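For example, using the IP address from the ifcfg-eno1 example above, you can confirm from the Cloud Lifecycle Manager that the node is reachable again after the reboot:

root # systemctl reboot
# then, on the Cloud Lifecycle Manager:
ardana > ping -c 3 10.13.111.14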
Add the ardana user and home directory:
root # useradd -m -d /var/lib/ardana -U ardana
Allow the user ardana to run sudo without a password by creating the /etc/sudoers.d/ardana file with the following configuration:
ardana ALL=(ALL) NOPASSWD:ALL
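You can optionally verify the syntax of the new sudoers drop-in file, for example:

root # visudo -cf /etc/sudoers.d/ardana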
When you start installation using the Cloud Lifecycle Manager, or if you are adding a SLES node to an existing cloud, you need to copy the Cloud Lifecycle Manager public key to the SLES node to enable passwordless SSH access. One way of doing this is to copy the file ~/.ssh/authorized_keys from another node in the cloud to the same location on the SLES node. If you are installing a new cloud, this file will be available on the nodes after running the bm-reimage.yml playbook. Ensure that there is global read access to the file /var/lib/ardana/.ssh/authorized_keys.
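For instance, assuming you can still authenticate to the SLES node as root (a hypothetical sketch; adjust the user, paths, and addresses to your environment), the key file could be staged through the Cloud Lifecycle Manager like this:

ardana > scp OTHER_NODE_IP:/var/lib/ardana/.ssh/authorized_keys /tmp/authorized_keys
ardana > ssh root@SLES_NODE_IP "install -d -m 700 -o ardana -g ardana /var/lib/ardana/.ssh"
ardana > scp /tmp/authorized_keys root@SLES_NODE_IP:/var/lib/ardana/.ssh/authorized_keys
ardana > ssh root@SLES_NODE_IP "chown ardana:ardana /var/lib/ardana/.ssh/authorized_keys; chmod 644 /var/lib/ardana/.ssh/authorized_keys"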
Use the following command to test passwordless SSH from the deployer and check the ability to remotely execute sudo commands:
ssh ardana@SLES_NODE_IP "sudo tail -5 /var/log/messages"
Run the configuration processor:
tux > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml
For automated installation, you can specify the required parameters. For example, the following command disables encryption by the configuration processor:
ansible-playbook -i hosts/localhost config-processor-run.yml \
  -e encrypt="" -e rekey=""
Use the following playbook to create a deployment directory:
tux > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost ready-deployment.yml
To ensure that all existing non-OS partitions on the nodes are wiped prior to installation, you need to run the wipe_disks.yml playbook. The wipe_disks.yml playbook is only meant to be run on systems immediately after running bm-reimage.yml. If used for any other case, it may not wipe all of the expected partitions. This step is not required if you are using clean machines.
Before you run the wipe_disks.yml playbook, you need to make the following changes in the deployment directory. In the ~/scratch/ansible/next/ardana/ansible/roles/diskconfig/tasks/get_disk_info.yml file, locate the following line:
shell: ls -1 /dev/mapper/ | grep "mpath" | grep -v {{ wipe_disks_skip_partition }}$ | grep -v {{ wipe_disks_skip_partition }}[0-9]
Replace it with:
shell: ls -1 /dev/mapper/ | grep "mpath" | grep -v {{ wipe_disks_skip_partition }}$ | grep -v {{ wipe_disks_skip_partition }}[0-9] | grep -v {{ wipe_disks_skip_partition }}_part[0-9]
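The additional grep -v expression excludes the _part entries that appear under /dev/mapper/ for partitions of the skipped device when user friendly names are in use. A hypothetical listing (exact names may differ on your system) illustrating what gets filtered:

tux > ls -1 /dev/mapper/
control
mpatha
mpatha_part1
mpatha_part2
mpathb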
In the ~/scratch/ansible/next/ardana/ansible/roles/multipath/tasks/install.yml file, set the multipath_user_friendly_names variable value to yes for all occurrences.
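For example, you can locate every occurrence first and then edit each one so that it reads multipath_user_friendly_names: yes (the exact formatting of the surrounding lines may differ):

ardana > grep -n "multipath_user_friendly_names" \
  ~/scratch/ansible/next/ardana/ansible/roles/multipath/tasks/install.yml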
Run the wipe_disks.yml playbook:
tux > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts wipe_disks.yml
If you have used an encryption password when running the configuration processor, use the command below, and enter the encryption password when prompted:
ardana > ansible-playbook -i hosts/verb_hosts wipe_disks.yml --ask-vault-pass
Run the site.yml playbook:
tux > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts site.yml
If you have used an encryption password when running the configuration processor, use the command below, and enter the encryption password when prompted:
ansible-playbook -i hosts/verb_hosts site.yml --ask-vault-pass
The step above runs osconfig to configure the cloud and ardana-deploy to deploy the cloud. Depending on the number of nodes, this step may take considerable time to complete.
If you are using network cards such as Qlogic Flex Fabric 536 and 630 series, there are additional OS configuration steps to support the importation of LUNs as well as some restrictions on supported configurations.
The restrictions are:
Only one network card can be enabled in the system.
The FCoE interfaces on this card are dedicated to FCoE traffic. They cannot have IP addresses associated with them.
NIC mapping cannot be used.
In addition to the configuration options above, you also need to specify the FCoE interfaces for the install and for the OS configuration. There are three places where you need to add additional configuration options for FCoE support:
In servers.yml, which is used for configuration of the system during OS install, FCoE interfaces need to be specified for each server. In particular, the MAC addresses of the FCoE interfaces need to be given, not the symbolic names (for example, eth2).
- id: compute1
  ip-addr: 10.245.224.201
  role: COMPUTE-ROLE
  server-group: RACK2
  mac-addr: 6c:c2:17:33:4c:a0
  ilo-ip: 10.1.66.26
  ilo-user: linuxbox
  ilo-password: linuxbox123
  boot-from-san: True
  fcoe-interfaces:
    - 6c:c2:17:33:4c:a1
    - 6c:c2:17:33:4c:a9
NIC mapping cannot be used.
For the osconfig phase, you will need to specify the fcoe-interfaces as a peer of network-interfaces in the net_interfaces.yml file:
- name: CONTROLLER-INTERFACES
  fcoe-interfaces:
    - name: fcoe
      devices:
        - eth2
        - eth3
  network-interfaces:
    - name: eth0
      device:
        name: eth0
      network-groups:
        - EXTERNAL-API
        - EXTERNAL-VM
        - GUEST
        - MANAGEMENT
The MAC addresses specified in the fcoe-interfaces stanza in servers.yml must correspond to the symbolic names used in the fcoe-interfaces stanza in net_interfaces.yml.
Also, to satisfy the FCoE restriction outlined in Section 6.3, “QLogic FCoE restrictions and additional configurations” above, there can be no overlap between the devices in fcoe-interfaces and those in network-interfaces in the net_interfaces.yml file. In the example, eth2 and eth3 are fcoe-interfaces while eth0 is in network-interfaces.
As part of the initial install from an ISO, additional parameters need to be supplied on the kernel command line:
multipath=true partman-fcoe/interfaces=<mac address1>,<mac address2> disk-detect/fcoe/enable=true --- quiet
Since NIC mapping is not used to guarantee the order of the networks across the system, the installer will remap the network interfaces in a deterministic fashion as part of the install. As part of the installer dialog, if DHCP is not configured for the interface, it is necessary to confirm that the appropriate interface is assigned the IP address. The network interfaces may not have the names expected when installing via an ISO. When you are asked to apply an IP address to an interface, press Alt–F2 and, in the console window, run the command ip a to examine the interfaces and their associated MAC addresses. Make a note of the interface name with the expected MAC address and use this in the subsequent dialog. Press Alt–F1 to return to the installation screen. Note that the names of the interfaces may have changed after the installation completes. These names are used consistently in any subsequent operations.
Therefore, even if FCoE is not used for boot from SAN (for example, if FCoE is only used for Cinder), it is recommended that fcoe-interfaces be specified as part of the install (without the multipath or disk detect options).
Alternatively, you need to run osconfig-fcoe-reorder.yml before site.yml or osconfig-run.yml is invoked, to reorder the networks in a similar manner to the installer. In this case, the nodes will need to be manually rebooted for the network reorder to take effect. Run osconfig-fcoe-reorder.yml in the following scenarios:
If you have used a third-party installer to provision your bare-metal nodes
If you are booting from a local disk (that is, one that is not presented from the SAN) but you want to use FCoE later, for example, for Cinder.
To run the command:
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts osconfig-fcoe-reorder.yml
If you do not run osconfig-fcoe-reorder.yml, you will encounter a failure in osconfig-run.yml.
If you are booting from a local disk, the LUNs that will be imported over FCoE will not be visible before site.yml or osconfig-run.yml has been run. However, if you need to import the LUNs before this, for instance, in scenarios where you need to run wipe_disks.yml (run this only after first running bm-reimage.yml), then you can run the fcoe-enable playbook across the nodes in question. This will configure FCoE and import the LUNs presented to the nodes.
cd ~/openstack/ardana/ansible
ansible-playbook -i hosts/verb_hosts fcoe-enable.yml
When installing a Red Hat compute host, the network interfaces will have names like ens1f2 rather than eth2. Therefore, a separate role and associated network-interfaces and fcoe-interfaces descriptions have to be provided in the input model for the Red Hat compute hosts. Here are some excerpts highlighting the changes required:
net_interfaces.yml
- name: RHEL-COMPUTE-INTERFACES
  fcoe-interfaces:
    - name: fcoe
      devices:
        - ens1f2
        - ens1f3
  network-interfaces:
    - name: ens1f0
      device:
        name: ens1f0
      network-groups:
        - EXTERNAL-VM
        - GUEST
        - MANAGEMENT
control_plane.yml
- name: rhel-compute
  resource-prefix: rhcomp
  server-role: RHEL-COMPUTE-ROLE
  allocation-policy: any
  min-count: 0
  service-components:
    - ntp-client
    - nova-compute
    - nova-compute-kvm
    - neutron-l3-agent
    - neutron-metadata-agent
    - neutron-openvswitch-agent
    - neutron-lbaasv2-agent
server_roles.yml
- name: RHEL-COMPUTE-ROLE
  interface-model: RHEL-COMPUTE-INTERFACES
  disk-model: COMPUTE-DISKS
servers.yml
- id: QlogicFCoE-Cmp2
  ip-addr: 10.245.224.204
  role: RHEL-COMPUTE-ROLE
  server-group: RACK2
  mac-addr: "6c:c2:17:33:4c:a0"
  ilo-ip: 10.1.66.26
  ilo-password: linuxbox123
  ilo-user: linuxbox
  boot-from-san: True
  fcoe-interfaces:
    - 6c:c2:17:33:4c:a1
    - 6c:c2:17:33:4c:a9
During manual installation of SUSE Linux Enterprise Server 12 SP3, select the desired SAN disk and create an LVM partitioning scheme that meets HPE Helion OpenStack requirements, that is, it has an ardana-vg volume group and an ardana-vg-root logical volume. For further information on partitioning, see Section 3.3, “Partitioning”.
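This layout is normally created in the installer's partitioner. Conceptually it corresponds to something like the following (device name and size are examples only, not values from this guide):

root # pvcreate /dev/mapper/mpatha_part2
root # vgcreate ardana-vg /dev/mapper/mpatha_part2
root # lvcreate -n root -L 50G ardana-vg   # exposed as ardana-vg-root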
After the installation is completed and the system is booted up, open the file /etc/multipath.conf and edit the defaults as follows:
defaults {
    user_friendly_names yes
    bindings_file "/etc/multipath/bindings"
}
Open the /etc/multipath/bindings file and map the expected device name to the SAN disk selected during installation. In HPE Helion OpenStack, the naming convention is mpatha, mpathb, and so on. For example:
mpatha-part1 360000000030349030-part1
mpatha-part2 360000000030349030-part2
mpatha-part3 360000000030349030-part3
mpathb-part1 360000000030349000-part1
mpathb-part2 360000000030349000-part2
Reboot the machine to enable the changes.
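After the reboot, you can verify that the user friendly multipath names are in effect, for example:

root # multipath -ll | head
root # ls -1 /dev/mapper/

The output should include devices named mpatha, mpathb, and so on.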