18 Boot from SAN and Multipath Configuration

18.1 Introduction

For information about supported hardware for multipathing, see Section 2.2, “Supported Hardware Configurations”.

Important

When exporting a LUN to a node for boot from SAN, ensure that the LUN is presented to the node as LUN 0, and complete any setup dialog in the firmware that is required for the node to boot the OS from this LUN.

Important

Any host that is connected to 3PAR storage must have its host persona set to 2-generic-alua on the 3PAR. Refer to the 3PAR documentation for the steps necessary to check this setting and change it if necessary.
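
On the 3PAR command line, the persona can typically be checked and changed with commands along the following lines. This is only a sketch: HOSTNAME is a placeholder, and the exact syntax can vary between 3PAR OS releases, so consult the 3PAR CLI documentation for your version.

  showhost -persona            # list hosts with their current persona
  sethost -persona 2 HOSTNAME  # set persona 2 (generic-ALUA) for the given host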

iSCSI boot from SAN is not supported. For more information on the use of cinder with multipath, see Section 35.1.3, “Multipath Support”.

To allow SUSE OpenStack Cloud 9 to use volumes from a SAN, you must specify configuration options for both the installation and the OS configuration phase. In all cases, the devices that are used are devices for which multipath is configured.

18.2 Install Phase Configuration

For FC-connected nodes, and for FCoE nodes whose network processor is from the Emulex family (such as the 650FLB), the following changes are required.

Instead of using Cobbler, you need to provision a baremetal node manually using the following procedure.

  1. During the manual installation of SUSE Linux Enterprise Server 12 SP4, select the desired SAN disk and create an LVM partitioning scheme that meets the SUSE OpenStack Cloud requirements: it must have an ardana-vg volume group and an ardana-vg-root logical volume. For more information on partitioning, see Section 15.3, “Partitioning”.
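
    For reference, an equivalent layout created from the command line would look roughly like the following sketch. The device name, partition number, and size are illustrative and must match the SAN disk you selected, and the sketch assumes that ardana-vg-root refers to a logical volume named root inside the ardana-vg volume group:

    root # pvcreate /dev/mapper/mpatha-part3            # LVM partition on the multipath SAN disk
    root # vgcreate ardana-vg /dev/mapper/mpatha-part3
    root # lvcreate -n root -L 50G ardana-vg            # appears as /dev/ardana-vg/root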

  2. Open the /etc/multipath/bindings file and map the expected device name to the SAN disk selected during installation. In SUSE OpenStack Cloud, the naming convention is mpatha, mpathb, and so on. For example:

    mpatha-part1 360000000030349030-part1
    mpatha-part2 360000000030349030-part2
    mpatha-part3 360000000030349030-part3
    
    mpathb-part1 360000000030349000-part1
    mpathb-part2 360000000030349000-part2
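
    If you need to look up the WWIDs of the SAN disks that are visible to the node, the multipath tools can list them, for example:

    root # multipath -ll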
  3. Reboot to enable the changes.

  4. Assign a static IP to the node:

    1. Use the ip addr command to list active network interfaces on your system:

      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
      2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
          link/ether f0:92:1c:05:89:70 brd ff:ff:ff:ff:ff:ff
          inet 10.13.111.178/26 brd 10.13.111.191 scope global eno1
             valid_lft forever preferred_lft forever
          inet6 fe80::f292:1cff:fe05:8970/64 scope link
             valid_lft forever preferred_lft forever
      3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
          link/ether f0:92:1c:05:89:74 brd ff:ff:ff:ff:ff:ff
    2. Identify the network interface that matches the MAC address of your server and edit the corresponding configuration file in /etc/sysconfig/network-scripts. For example, for the eno1 interface, open the /etc/sysconfig/network-scripts/ifcfg-eno1 file and edit the IPADDR and NETMASK values to match your environment. The IPADDR value is used in the corresponding stanza in servers.yml. You may also need to set BOOTPROTO to none:

      TYPE=Ethernet
      BOOTPROTO=none
      DEFROUTE=yes
      PEERDNS=yes
      PEERROUTES=yes
      IPV4_FAILURE_FATAL=no
      IPV6INIT=yes
      IPV6_AUTOCONF=yes
      IPV6_DEFROUTE=yes
      IPV6_PEERDNS=yes
      IPV6_PEERROUTES=yes
      IPV6_FAILURE_FATAL=no
      NAME=eno1
      UUID=360360aa-12aa-444a-a1aa-ee777a3a1a7a
      DEVICE=eno1
      ONBOOT=yes
      NETMASK=255.255.255.192
      IPADDR=10.10.100.10
    3. Reboot the SLES node and ensure that it can be accessed from the Cloud Lifecycle Manager.
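
      For example, a basic reachability check from the Cloud Lifecycle Manager could look like the following (the IP address is the illustrative one from the ifcfg-eno1 example above):

      tux > ping -c 3 10.10.100.10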

  5. Add the ardana user and home directory:

    root # useradd -m -d /var/lib/ardana -U ardana
  6. Allow the user ardana to run sudo without a password by creating the /etc/sudoers.d/ardana file with the following configuration:

    ardana ALL=(ALL) NOPASSWD:ALL
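
    Optionally, you can verify the syntax of the new sudoers drop-in file with visudo before continuing:

    root # visudo -cf /etc/sudoers.d/ardana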
  7. When you start installation using the Cloud Lifecycle Manager, or if you are adding a SLES node to an existing cloud, copy the Cloud Lifecycle Manager public key to the SLES node to enable passwordless SSH access. One way of doing this is to copy the file ~/.ssh/authorized_keys from another node in the cloud to the same location on the SLES node. If you are installing a new cloud, this file will be available on the nodes after running the bm-reimage.yml playbook. Ensure that there is global read access to the file /var/lib/ardana/.ssh/authorized_keys.
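
    For example, this could be done as root on the new SLES node by pulling the file from an existing cloud node. OTHER_NODE_IP is a placeholder for a node that already holds the key; adjust the commands to your environment:

    root # mkdir -p /var/lib/ardana/.ssh
    root # scp ardana@OTHER_NODE_IP:.ssh/authorized_keys /var/lib/ardana/.ssh/authorized_keys
    root # chown -R ardana:ardana /var/lib/ardana/.ssh
    root # chmod 644 /var/lib/ardana/.ssh/authorized_keys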

    Use the following command to test passwordless SSH from the deployer and check the ability to remotely execute sudo commands:

    tux > ssh ardana@SLES_NODE_IP "sudo tail -5 /var/log/messages"

18.2.1 Deploying the Cloud

  1. In ~/openstack/my_cloud/config/multipath/multipath_settings.yml, set manual_multipath_conf to True so that multipath.conf on manually installed nodes is not overwritten.
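
    The resulting file would contain a line like the following (a minimal sketch; the file may contain additional settings):

    manual_multipath_conf: True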

  2. Commit the changes.

    tux > cd ~/openstack
    tux > git add -A
    tux > git commit -m "multipath config"
  3. Run config-processor and ready-deployment.

    tux > cd ~/openstack/ardana/ansible
    tux > ansible-playbook -i hosts/localhost config-processor-run.yml
    tux > ansible-playbook -i hosts/localhost ready-deployment.yml
  4. Ensure that all existing non-OS partitions on the nodes are wiped prior to installation by running the wipe_disks.yml playbook.

    Note
    Note

    Confirm that the disk holding your root partition is not listed among the disks to be wiped.
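
    One way to double-check which device holds the root volume group is to inspect the block device tree on the node before running the playbook, for example:

    tux > lsblk -o NAME,TYPE,MOUNTPOINT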

    tux > cd ~/scratch/ansible/next/ardana/ansible
    tux > ansible-playbook -i hosts/verb_hosts wipe_disks.yml
  5. Run the site.yml playbook:

    tux > cd ~/scratch/ansible/next/ardana/ansible
    tux > ansible-playbook -i hosts/verb_hosts site.yml