Applies to SUSE OpenStack Cloud 9

18 Boot from SAN and Multipath Configuration

18.1 Introduction

For information about supported hardware for multipathing, see Section 2.2, “Supported Hardware Configurations”.

Important

When exporting a LUN to a node for boot from SAN, ensure that the LUN is presented as LUN 0, and complete any setup dialog in the firmware that is needed to boot the OS from LUN 0.

Important

Any hosts connected to 3PAR storage must have a host persona of 2-generic-alua set on the 3PAR. Refer to the 3PAR documentation for the steps to check this setting and, if necessary, change it.

iSCSI boot from SAN is not supported. For more information on the use of cinder with multipath, see Section 35.1.3, “Multipath Support”.

To allow SUSE OpenStack Cloud 9 to use volumes from a SAN, you must specify configuration options for both the installation and the OS configuration phase. In all cases, the devices used are devices for which multipath is configured.

18.2 Install Phase Configuration

For FC-connected nodes, and for FCoE nodes whose network processor is from the Emulex family (such as the 650FLB), the following changes are required.

Instead of using Cobbler, you need to provision a baremetal node manually using the following procedure.

  1. During manual installation of SUSE Linux Enterprise Server 12 SP4, select the desired SAN disk and create an LVM partitioning scheme that meets the SUSE OpenStack Cloud requirements: an ardana-vg volume group containing an ardana-vg-root logical volume. For more information on partitioning, see Section 15.3, “Partitioning”.

  2. Open the /etc/multipath/bindings file and map the expected device name to the SAN disk selected during installation. In SUSE OpenStack Cloud, the naming convention is mpatha, mpathb, and so on. For example:

    mpatha-part1 360000000030349030-part1
    mpatha-part2 360000000030349030-part2
    mpatha-part3 360000000030349030-part3
    
    mpathb-part1 360000000030349000-part1
    mpathb-part2 360000000030349000-part2
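The WWID used in the bindings file is typically taken from the output of `multipath -ll` on the installed node. As a sketch, a small helper can generate the per-partition entries in the format shown above; the `make_bindings` function is illustrative only, not part of the product, and the alias and WWID below are the ones from the example.

```shell
# Hypothetical helper: emit bindings lines mapping a multipath alias and
# its partitions to a WWID, matching the example entries above.
make_bindings() {
  local alias=$1 wwid=$2 parts=$3
  local i
  for i in $(seq 1 "$parts"); do
    printf '%s-part%d %s-part%d\n' "$alias" "$i" "$wwid" "$i"
  done
}

make_bindings mpatha 360000000030349030 3
# prints:
# mpatha-part1 360000000030349030-part1
# mpatha-part2 360000000030349030-part2
# mpatha-part3 360000000030349030-part3
```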
  3. Reboot to enable the changes.

  4. Assign a static IP to the node:

    1. Use the ip addr command to list active network interfaces on your system:

      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
      2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
          link/ether f0:92:1c:05:89:70 brd ff:ff:ff:ff:ff:ff
          inet 10.13.111.178/26 brd 10.13.111.191 scope global eno1
             valid_lft forever preferred_lft forever
          inet6 fe80::f292:1cff:fe05:8970/64 scope link
             valid_lft forever preferred_lft forever
      3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
          link/ether f0:92:1c:05:89:74 brd ff:ff:ff:ff:ff:ff
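Rather than matching the MAC address by eye, the interface name can be extracted programmatically. The awk one-liner below is an illustrative sketch, not from the product documentation; the `sample` variable mimics `ip -o link` output using the addresses from the listing above, and on a live system you would pipe `ip -o link` into the awk command instead.

```shell
# Find the interface whose link-layer (MAC) address matches the server's.
mac="f0:92:1c:05:89:70"
# Sample lines in the one-line-per-interface format of `ip -o link`:
sample='2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ... link/ether f0:92:1c:05:89:70 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ... link/ether f0:92:1c:05:89:74 brd ff:ff:ff:ff:ff:ff'
printf '%s\n' "$sample" | awk -v mac="$mac" '$0 ~ mac { gsub(":", "", $2); print $2 }'
# prints: eno1
```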
    2. Identify the network interface that matches the MAC address of your server and edit the corresponding configuration file in /etc/sysconfig/network (SUSE Linux Enterprise Server keeps interface configuration there, not in the /etc/sysconfig/network-scripts directory used by some other distributions). For example, for the eno1 interface, open the /etc/sysconfig/network/ifcfg-eno1 file and edit the IPADDR and NETMASK values to match your environment. The IPADDR is used in the corresponding stanza in servers.yml. Also set BOOTPROTO to static so the address is assigned statically:

      STARTMODE='auto'
      BOOTPROTO='static'
      IPADDR='10.10.100.10'
      NETMASK='255.255.255.192'
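The NETMASK value must agree with the prefix length shown by `ip addr` (/26 in the example output above). As a cross-check, the conversion can be done in plain shell arithmetic; the `prefix_to_netmask` helper below is an illustrative sketch, not part of the product.

```shell
# Convert a CIDR prefix length to a dotted-quad netmask, to verify that
# NETMASK matches the prefix reported by `ip addr`.
prefix_to_netmask() {
  local p=$1
  local mask=$(( (0xFFFFFFFF << (32 - p)) & 0xFFFFFFFF ))
  printf '%d.%d.%d.%d\n' \
    $(( (mask >> 24) & 255 )) $(( (mask >> 16) & 255 )) \
    $(( (mask >> 8)  & 255 )) $((  mask        & 255 ))
}

prefix_to_netmask 26   # → 255.255.255.192
```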
    3. Reboot the SLES node and ensure that it can be accessed from the Cloud Lifecycle Manager.

  5. Add the ardana user and home directory:

    root # useradd -m -d /var/lib/ardana -U ardana
  6. Allow the user ardana to run sudo without a password by creating the /etc/sudoers.d/ardana file with the following configuration:

    ardana ALL=(ALL) NOPASSWD:ALL
  7. When you start installation using the Cloud Lifecycle Manager, or when you are adding a SLES node to an existing cloud, copy the Cloud Lifecycle Manager public key to the SLES node to enable passwordless SSH access. One way of doing this is to copy the file ~/.ssh/authorized_keys from another node in the cloud to the same location on the SLES node. If you are installing a new cloud, this file becomes available on the nodes after running the bm-reimage.yml playbook. Ensure that there is global read access to the file /var/lib/ardana/.ssh/authorized_keys.

    Use the following command to test passwordless SSH from the deployer and check the ability to remotely execute sudo commands:

    ardana > ssh ardana@SLES_NODE_IP "sudo tail -5 /var/log/messages"
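Because the procedure requires global read access to authorized_keys, it can be worth verifying the file mode explicitly. The sketch below checks the "other" read bit using GNU `stat`; it runs against a temporary file for illustration, whereas on the node the path would be /var/lib/ardana/.ssh/authorized_keys.

```shell
# Check whether a file is world-readable by testing the "other" read bit.
# stat -c '%a' prints the octal permission mode (GNU coreutils).
f=$(mktemp)
chmod 644 "$f"
if [ $(( 0$(stat -c '%a' "$f") & 4 )) -ne 0 ]; then
  echo "world-readable"
else
  echo "NOT world-readable"
fi
rm -f "$f"
```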

18.2.1 Deploying the Cloud

  1. In openstack/my_cloud/config/multipath/multipath_settings.yml, set manual_multipath_conf to True so that multipath.conf on manually installed nodes is not overwritten.
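Assuming the default layout of multipath_settings.yml, the resulting setting is a single key:

```yaml
# openstack/my_cloud/config/multipath/multipath_settings.yml
# Keep multipath.conf on manually installed nodes untouched.
manual_multipath_conf: True
```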

  2. Commit the changes.

    ardana > cd ~/openstack
    ardana > git add -A
    ardana > git commit -m "multipath config"
  3. Run config-processor and ready-deployment.

    ardana > cd ~/openstack/ardana/ansible
    ardana > ansible-playbook -i hosts/localhost config-processor-run.yml
    ardana > ansible-playbook -i hosts/localhost ready-deployment.yml
  4. Ensure that all existing non-OS partitions on the nodes are wiped prior to installation by running the wipe_disks.yml playbook.

    Note

    Confirm that the disk holding your root partition is not in the list of disks to be wiped.

    ardana > cd ~/scratch/ansible/next/ardana/ansible
    ardana > ansible-playbook -i hosts/verb_hosts wipe_disks.yml
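To see which device must stay out of the wipe list, you can print the device backing the root filesystem on the target node. This is a generic check using standard tools, not part of the playbook:

```shell
# Print the device backing the root filesystem; this device (and its
# parent disk) must not appear among the disks to be wiped.
df -P / | awk 'NR==2 {print $1}'
```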
  5. Run the site.yml playbook:

    ardana > cd ~/scratch/ansible/next/ardana/ansible
    ardana > ansible-playbook -i hosts/verb_hosts site.yml