For information about supported hardware for multipathing, see Section 2.2, “Supported Hardware Configurations”.
When exporting a LUN to a node for boot from SAN, ensure that the boot LUN is presented as LUN 0, and configure the firmware (through whatever setup dialog is necessary) to consume this LUN 0 for OS boot.
Any hosts that are connected to 3PAR storage must have a host persona of 2-generic-alua set on the 3PAR. Refer to the 3PAR documentation for the steps necessary to check this setting and change it if necessary.
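As a hedged illustration only (the host name is a placeholder, and the exact procedure is described in the 3PAR documentation), checking and setting the persona from the 3PAR CLI looks roughly like this:
# List hosts and their current personas
showhost -persona
# Set persona 2 (Generic-ALUA) for the host; MY_HOST_NAME is a placeholder
sethost -persona 2 MY_HOST_NAME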
iSCSI boot from SAN is not supported. For more information on the use of cinder with multipath, see Section 35.1.3, “Multipath Support”.
To allow SUSE OpenStack Cloud 9 to use volumes from a SAN, you have to specify configuration options for both the installation and the OS configuration phase. In all cases, only devices for which multipath is configured are used.
For FC-connected nodes, and for FCoE nodes whose network processor is from the Emulex family (such as the 650FLB), the following changes are required.
Instead of using Cobbler, you need to provision a baremetal node manually using the following procedure.
During manual installation of SUSE Linux Enterprise Server 12 SP4, select the desired SAN disk and create an LVM partitioning scheme that meets SUSE OpenStack Cloud requirements: it has an ardana-vg volume group and an ardana-vg-root logical volume. For more information on partitioning, see Section 15.3, “Partitioning”.
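As a rough sketch only (normally you create this layout in the installer's partitioner), assuming the selected SAN disk is visible as /dev/mapper/mpatha and its third partition is reserved for LVM, the target layout corresponds to commands along these lines; the partition number and size are placeholders:
# Use the LVM partition on the multipath device as a physical volume
root # pvcreate /dev/mapper/mpatha-part3
# Create the required volume group and root logical volume
root # vgcreate ardana-vg /dev/mapper/mpatha-part3
root # lvcreate -n root -L 50G ardana-vg
The logical volume named root inside the ardana-vg volume group is what appears as ardana-vg-root.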
Open the /etc/multipath/bindings file and map the expected device name to the SAN disk selected during installation. In SUSE OpenStack Cloud, the naming convention is mpatha, mpathb, and so on. For example:
mpatha-part1 360000000030349030-part1
mpatha-part2 360000000030349030-part2
mpatha-part3 360000000030349030-part3
mpathb-part1 360000000030349000-part1
mpathb-part2 360000000030349000-part2
Reboot to enable the changes.
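After the reboot, one way to confirm that the new bindings took effect is to list the multipath devices, for example:
root # multipath -ll
root # ls /dev/mapper/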
Assign a static IP to the node:
Use the ip addr command to list active network interfaces on your system:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether f0:92:1c:05:89:70 brd ff:ff:ff:ff:ff:ff
    inet 10.13.111.178/26 brd 10.13.111.191 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::f292:1cff:fe05:8970/64 scope link
       valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether f0:92:1c:05:89:74 brd ff:ff:ff:ff:ff:ff
Identify the network interface that matches the MAC address of your server and edit the corresponding configuration file in /etc/sysconfig/network-scripts. For example, for the eno1 interface, open the /etc/sysconfig/network-scripts/ifcfg-eno1 file and edit the IPADDR and NETMASK values to match your environment. The IPADDR is used in the corresponding stanza in servers.yml. You may also need to set BOOTPROTO to none:
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eno1
UUID=360360aa-12aa-444a-a1aa-ee777a3a1a7a
DEVICE=eno1
ONBOOT=yes
NETMASK=255.255.255.192
IPADDR=10.10.100.10
Reboot the SLES node and ensure that it can be accessed from the Cloud Lifecycle Manager.
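For example, assuming the address 10.10.100.10 configured above, a quick reachability check from the Cloud Lifecycle Manager could be:
tux > ping -c 3 10.10.100.10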
Add the ardana user and home directory:
root # useradd -m -d /var/lib/ardana -U ardana
Allow the user ardana to run sudo without a password by creating the /etc/sudoers.d/ardana file with the following configuration:
ardana ALL=(ALL) NOPASSWD:ALL
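Optionally, you can verify the syntax of the new sudoers file to avoid locking yourself out of sudo, for example:
root # visudo -c -f /etc/sudoers.d/ardana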
When you start installation using the Cloud Lifecycle Manager, or if you are adding a SLES node to an existing cloud, copy the Cloud Lifecycle Manager public key to the SLES node to enable passwordless SSH access. One way of doing this is to copy the file ~/.ssh/authorized_keys from another node in the cloud to the same location on the SLES node. If you are installing a new cloud, this file will be available on the nodes after running the bm-reimage.yml playbook. Ensure that there is global read access to the file /var/lib/ardana/.ssh/authorized_keys.
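As one possible way of doing this (the source node address 10.10.100.11 is a placeholder), copy the file from an existing node and adjust ownership and permissions on the SLES node:
root # mkdir -p /var/lib/ardana/.ssh
root # scp ardana@10.10.100.11:.ssh/authorized_keys /var/lib/ardana/.ssh/authorized_keys
root # chown -R ardana:ardana /var/lib/ardana/.ssh
root # chmod 700 /var/lib/ardana/.ssh
root # chmod 644 /var/lib/ardana/.ssh/authorized_keys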
Use the following command to test passwordless SSH from the deployer and check the ability to remotely execute sudo commands:
ssh ardana@SLES_NODE_IP "sudo tail -5 /var/log/messages"
In openstack/my_cloud/config/multipath/multipath_settings.yml, set manual_multipath_conf to True so that multipath.conf on manually installed nodes is not overwritten.
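The resulting entry in multipath_settings.yml should look roughly like this (other settings in the file stay as they are):
manual_multipath_conf: True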
Commit the changes.
tux > cd ~/openstack
tux > git add -A
tux > git commit -m "multipath config"
Run config-processor and ready-deployment.
tux > cd ~/openstack/ardana/ansible
tux > ansible-playbook -i hosts/localhost config-processor-run.yml
tux > ansible-playbook -i hosts/localhost ready-deployment.yml
Ensure that all existing non-OS partitions on the nodes are wiped prior to installation by running the wipe_disks.yml playbook. Confirm that your root partition disk is not listed in the disks to be wiped.
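One quick way to check which disk holds the root partition on the SLES node (and therefore must not appear in the wipe list) is, for example:
root # lsblk -o NAME,SIZE,TYPE,MOUNTPOINT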
tux > cd ~/scratch/ansible/next/ardana/ansible
tux > ansible-playbook -i hosts/verb_hosts wipe_disks.yml
Run the site.yml playbook:
tux > cd ~/scratch/ansible/next/ardana/ansible
tux > ansible-playbook -i hosts/verb_hosts site.yml