Deployment Guide using Cloud Lifecycle Manager
Applies to SUSE OpenStack Cloud 9

20 Preparing for Stand-Alone Deployment

20.1 Cloud Lifecycle Manager Installation Alternatives

The Cloud Lifecycle Manager can be installed on a Control Plane or on a stand-alone server.

  • Installing the Cloud Lifecycle Manager on a Control Plane is done during the process of deploying your Cloud. Your Cloud and the Cloud Lifecycle Manager are deployed together.

  • With a stand-alone Cloud Lifecycle Manager, you install the deployer first and then deploy your Cloud in a separate process. Either the Install UI or the command line can be used to deploy a stand-alone Cloud Lifecycle Manager.

Each method is suited for particular needs. The best choice depends on your situation.

Stand-alone Deployer

  • + Compared to a Control Plane deployer, a stand-alone deployer is easier to back up and redeploy in case of disaster

  • + Separates cloud management from components being managed

  • + Does not use Control Plane resources

  • - Another server is required (less of a disadvantage if using a VM)

  • - Installation may be more complex than for a Control Plane Cloud Lifecycle Manager

Control Plane Deployer

  • + Installation is usually simpler than installing a stand-alone deployer

  • + Requires fewer servers or VMs

  • - Could contend with workloads for resources

  • - Harder to redeploy in case of failure than a stand-alone deployer

  • - There is a risk to the Cloud Lifecycle Manager when updating or modifying controllers

  • - Runs on one of the servers that is deploying or managing your Cloud


  • A Control Plane Cloud Lifecycle Manager is best for small, simple Cloud deployments.

  • With a larger, more complex cloud, a stand-alone deployer provides better recoverability and the separation of manager from managed components.

20.2 Installing a Stand-Alone Deployer

If you do not intend to install a stand-alone deployer, proceed to installing the Cloud Lifecycle Manager on a Control Plane.

20.2.1 Prepare for Cloud Installation

  1. Review Chapter 14, Pre-Installation Checklist for recommended pre-installation tasks.

  2. Prepare the Cloud Lifecycle Manager node. The Cloud Lifecycle Manager must be accessible either directly or via ssh, and have SUSE Linux Enterprise Server 12 SP4 installed. All nodes must be accessible to the Cloud Lifecycle Manager. If the nodes do not have direct access to online Cloud subscription channels, the Cloud Lifecycle Manager node will need to host the Cloud repositories.

    1. If you followed the installation instructions for Cloud Lifecycle Manager server (see Chapter 15, Installing the Cloud Lifecycle Manager server), SUSE OpenStack Cloud software should already be installed. Double-check whether SUSE Linux Enterprise and SUSE OpenStack Cloud are properly registered at the SUSE Customer Center by starting YaST and running Software › Product Registration.

      If you have not yet installed SUSE OpenStack Cloud, do so by starting YaST and running Software › Product Registration › Select Extensions. Choose SUSE OpenStack Cloud and follow the on-screen instructions. Make sure to register SUSE OpenStack Cloud during the installation process and to install the software pattern patterns-cloud-ardana.

      tux > sudo zypper -n in patterns-cloud-ardana
    2. Ensure the SUSE OpenStack Cloud media repositories and updates repositories are made available to all nodes in your deployment. This can be accomplished either by configuring the Cloud Lifecycle Manager server as an SMT mirror as described in Chapter 16, Installing and Setting Up an SMT Server on the Cloud Lifecycle Manager server (Optional) or by syncing or mounting the Cloud and updates repositories to the Cloud Lifecycle Manager server as described in Chapter 17, Software Repository Setup.

    3. Configure passwordless sudo for the user created when setting up the node (as described in Section 15.4, “Creating a User”). Note that this is not the user ardana that will be used later in this procedure. In the following we assume you named the user cloud. Run the command visudo as user root and add the following line to the end of the file:

      CLOUD ALL=(ALL) NOPASSWD:ALL

      Make sure to replace CLOUD with your user name choice.

    4. Set the password for the user ardana:

      tux > sudo passwd ardana
    5. Become the user ardana:

      tux > su - ardana
    6. Place a copy of the SUSE Linux Enterprise Server 12 SP4 .iso in the ardana home directory, /var/lib/ardana, and rename it to sles12sp4.iso.

    7. Install the templates, examples, and working model directories:

      ardana > /usr/bin/ardana-init
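The passwordless sudo entry from step 3 can be sanity-checked before you rely on it. The sketch below is illustrative only, not part of the product tooling: it writes the expected line (with the example user name cloud used above) to a scratch file and verifies its shape before you add it via visudo.

```shell
# Illustrative check, not product tooling: write the sudoers line from
# step 3 (user name "cloud" is the example used above) to a scratch file
# and verify its shape before adding it to /etc/sudoers via visudo.
SNIPPET=$(mktemp)
echo 'cloud ALL=(ALL) NOPASSWD:ALL' > "$SNIPPET"
grep -Eq '^cloud[[:space:]]+ALL=\(ALL\)[[:space:]]+NOPASSWD:ALL$' "$SNIPPET" \
  && echo "sudoers entry looks well-formed"
```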

20.2.2 Configuring for a Stand-Alone Deployer

The following steps are necessary to set up a stand-alone deployer whether you will be using the Install UI or command line.

  1. Copy the SUSE OpenStack Cloud Entry-scale KVM example input model to a stand-alone input model. This new input model will be edited so that it can be used as a stand-alone Cloud Lifecycle Manager installation.

    tux > cp -r ~/openstack/examples/entry-scale-kvm \
    ~/openstack/examples/entry-scale-kvm-stand-alone-deployer
  2. Change to the new directory:

    tux > cd ~/openstack/examples/entry-scale-kvm-stand-alone-deployer
  3. Edit the cloudConfig.yml file to change the name of the input model. This will make the model available both to the Install UI and to the command line installation process.

    Change name: entry-scale-kvm to name: entry-scale-kvm-stand-alone-deployer

  4. Change to the data directory:

    tux > cd data

  5. Make the following edits to your configuration files.


    Important: The indentation of each of the input files is significant; incorrect indentation will cause errors. Use the existing content in each of these files as a reference when adding additional content for your Cloud Lifecycle Manager.

    • Update control_plane.yml to add the Cloud Lifecycle Manager.

    • Update server_roles.yml to add the Cloud Lifecycle Manager role.

    • Update net_interfaces.yml to add the interface definition for the Cloud Lifecycle Manager.

    • Create a disks_lifecycle_manager.yml file to define the disk layout for the Cloud Lifecycle Manager.

    • Update servers.yml to add the dedicated Cloud Lifecycle Manager node.

    control_plane.yml: The snippet below shows the addition of a single-node cluster into the control plane to host the Cloud Lifecycle Manager service.


    In addition to adding the new cluster, you also have to remove the lifecycle-manager component from cluster1 in the example.

         - name: cluster0
           cluster-prefix: c0
           server-role: LIFECYCLE-MANAGER-ROLE
           member-count: 1
           allocation-policy: strict
           service-components:
             - lifecycle-manager
             - ntp-client
         - name: cluster1
           cluster-prefix: c1
           server-role: CONTROLLER-ROLE
           member-count: 3
           allocation-policy: strict
           service-components:
             - ntp-server

    This specifies a single node of role LIFECYCLE-MANAGER-ROLE hosting the Cloud Lifecycle Manager.

    server_roles.yml: The snippet below shows the insertion of the new server role definition:

          - name: LIFECYCLE-MANAGER-ROLE
            interface-model: LIFECYCLE-MANAGER-INTERFACES
            disk-model: LIFECYCLE-MANAGER-DISKS
          - name: CONTROLLER-ROLE

    This defines a new server role which references a new interface-model and disk-model to be used when configuring the server.

    net_interfaces.yml: The snippet below shows the insertion of the network-interface info:

        - name: LIFECYCLE-MANAGER-INTERFACES
          network-interfaces:
            - name: BOND0
              device:
                name: bond0
              bond-data:
                options:
                  mode: active-backup
                  miimon: 200
                  primary: hed3
                provider: linux
                devices:
                  - name: hed3
                  - name: hed4
              network-groups:
                - MANAGEMENT

    This assumes that the server uses the same physical networking layout as the other servers in the example.

    disks_lifecycle_manager.yml: In the examples, disk-models are provided as separate files (this is just a convention, not a limitation) so the following should be added as a new file named disks_lifecycle_manager.yml:

         version: 2

         disk-models:
         - name: LIFECYCLE-MANAGER-DISKS
           # Disk model to be used for Cloud Lifecycle Manager nodes
           # /dev/sda_root is used as a volume group for /, /var/log and /var/crash
           # sda_root is a templated value to align with whatever partition is really used
           # This value is checked in os config and replaced by the partition actually used
           # on sda e.g. sda1 or sda5
           volume-groups:
             - name: ardana-vg
               physical-volumes:
                 - /dev/sda_root
               # The policy is not to consume 100% of the space of each volume group.
               # 5% should be left free for snapshots and to allow for some flexibility.
               logical-volumes:
                 - name: root
                   size: 80%
                   fstype: ext4
                   mount: /
                 - name: crash
                   size: 15%
                   mount: /var/crash
                   fstype: ext4
                   mkfs-opts: -O large_file
               consumer:
                 name: os

    servers.yml: The snippet below shows the insertion of an additional server used for hosting the Cloud Lifecycle Manager. Provide the address information here for the server you are running on, that is, the node where you have installed the SUSE OpenStack Cloud ISO.

         # NOTE: Addresses of servers need to be changed to match your environment.
         #       Add additional servers as required
         - id: lifecycle-manager
           ip-addr: YOUR IP ADDRESS HERE
           role: LIFECYCLE-MANAGER-ROLE
           server-group: RACK1
           nic-mapping: HP-SL230-4PORT
           mac-addr: 8c:dc:d4:b5:c9:e0
           # ipmi information is not needed
         # Controllers
         - id: controller1
           role: CONTROLLER-ROLE
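The cloudConfig.yml rename from step 3 can also be scripted. The snippet below is a sketch, not part of the product tooling: it operates on a scratch stand-in file; in a real run you would point WORKDIR at ~/openstack/examples/entry-scale-kvm-stand-alone-deployer instead.

```shell
# Sketch of scripting the input model rename from step 3. WORKDIR is a
# scratch directory with a minimal stand-in file here; point it at the
# real model directory in practice.
WORKDIR=$(mktemp -d)
printf 'cloud:\n  name: entry-scale-kvm\n' > "$WORKDIR/cloudConfig.yml"

# Rename the model so it is recognized as a distinct configuration.
sed -i 's/name: entry-scale-kvm$/name: entry-scale-kvm-stand-alone-deployer/' \
  "$WORKDIR/cloudConfig.yml"
grep 'name:' "$WORKDIR/cloudConfig.yml"
```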
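As a quick arithmetic sanity check on the disk model defined earlier: the root and crash volume percentages should leave the free headroom that the comments in the model recommend. This check is illustrative only, not part of the product tooling.

```shell
# Arithmetic check on the disks_lifecycle_manager.yml volumes above:
# root (80%) plus crash (15%) consume 95% of ardana-vg, leaving the 5%
# headroom recommended for snapshots and flexibility.
root=80
crash=15
allocated=$((root + crash))
free=$((100 - allocated))
echo "allocated: ${allocated}%  free: ${free}%"
[ "$free" -ge 5 ] && echo "OK: headroom preserved"
```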

With the stand-alone input model complete, you are ready to proceed to installing the stand-alone deployer with either the Install UI or the command line.
