Applies to SUSE OpenStack Cloud 9

7 Other Topics

7.1 Services and Service Components

Compute

  Virtual Machine Provisioning (nova): nova-api, nova-compute,
  nova-compute-hyperv, nova-compute-ironic, nova-compute-kvm, nova-conductor,
  nova-console-auth, nova-esx-compute-proxy, nova-metadata, nova-novncproxy,
  nova-scheduler, nova-scheduler-ironic, nova-placement-api

  Bare Metal Provisioning (ironic): ironic-api, ironic-conductor

Networking

  Networking (neutron): infoblox-ipam-agent, neutron-dhcp-agent,
  neutron-l2gateway-agent, neutron-l3-agent, neutron-metadata-agent,
  neutron-ml2-plugin, neutron-openvswitch-agent, neutron-ovsvapp-agent,
  neutron-server, neutron-sriov-nic-agent, neutron-vpn-agent

  Network Load Balancer (octavia): octavia-api, octavia-health-manager

  Domain Name Service (designate): designate-api, designate-central,
  designate-mdns, designate-mdns-external, designate-pool-manager,
  designate-zone-manager

Storage

  Block Storage (cinder): cinder-api, cinder-backup, cinder-scheduler,
  cinder-volume

  Object Storage (swift): swift-account, swift-common, swift-container,
  swift-object, swift-proxy, swift-ring-builder, swift-rsync

Image

  Image Management (glance): glance-api, glance-registry

Security

  Key Management (barbican): barbican-api, barbican-worker

  Identity and Authentication (keystone): keystone-api

Orchestration

  Orchestration (heat): heat-api, heat-api-cfn, heat-api-cloudwatch,
  heat-engine

Operations

  Telemetry (ceilometer): ceilometer-agent-notification, ceilometer-common,
  ceilometer-polling

  Cloud Lifecycle Manager (ardana): ardana-ux-services, lifecycle-manager,
  lifecycle-manager-target

  Dashboard (horizon): horizon

  Centralized Logging (logging): logging-api, logging-producer, logging-rotate,
  logging-server

  Monitoring (monasca): monasca-agent, monasca-api, monasca-dashboard,
  monasca-liveness-check, monasca-notifier, monasca-persister,
  monasca-threshold, monasca-transform

  Operations Console (operations): ops-console-web

  OpenStack Functional Test Suite (tempest): tempest

Foundation

  OpenStack Clients (clients): barbican-client, cinder-client,
  designate-client, glance-client, heat-client, ironic-client, keystone-client,
  monasca-client, neutron-client, nova-client, openstack-client, swift-client

  Supporting Services (foundation): apache2, bind, bind-ext, influxdb,
  ip-cluster, kafka, memcached, mysql, ntp-client, ntp-server, openvswitch,
  rabbitmq, spark, storm, cassandra, zookeeper

7.2 Name Generation

Names are generated by the configuration processor for all allocated IP addresses. A server connected to multiple networks will have multiple names associated with it. One of these may be assigned as the hostname for a server via the network-group configuration (see Section 6.12, “NIC Mappings”). Names are generated from data taken from various parts of the input model as described in the following sections.

Clusters

Names generated for servers in a cluster have the following form:

CLOUD-CONTROL-PLANE-CLUSTER-MEMBER-PREFIXMEMBER_ID-NETWORK

Example: ardana-cp1-core-m1-mgmt

CLOUD: comes from the hostname-data section of the cloud object (see Section 6.1, “Cloud Configuration”).
CONTROL-PLANE: the control-plane prefix or name (see Section 6.2, “Control Plane”).
CLUSTER: the cluster-prefix name (see Section 6.2.1, “Clusters”).
MEMBER-PREFIX: comes from the hostname-data section of the cloud object (see Section 6.1, “Cloud Configuration”).
MEMBER_ID: the ordinal within the cluster, generated by the configuration processor as servers are allocated to the cluster.
NETWORK: comes from the hostname-suffix of the network group to which the network belongs (see Section 6.12, “NIC Mappings”).
Resource Nodes

Names generated for servers in a resource group have the following form:

CLOUD-CONTROL-PLANE-RESOURCE-PREFIXMEMBER_ID-NETWORK

Example: ardana-cp1-comp0001-mgmt

CLOUD: comes from the hostname-data section of the cloud object (see Section 6.1, “Cloud Configuration”).
CONTROL-PLANE: the control-plane prefix or name (see Section 6.2, “Control Plane”).
RESOURCE-PREFIX: the resource-prefix value (see Section 6.2.2, “Resources”).
MEMBER_ID: the ordinal within the resource group, generated by the configuration processor as servers are allocated, padded with leading zeroes to four digits.
NETWORK: comes from the hostname-suffix of the network group to which the network belongs (see Section 6.12, “NIC Mappings”).
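Taken together, the two naming patterns can be sketched in a few lines of Python. This is an illustration only; the real names are produced by the configuration processor from the input model:

```python
def cluster_hostname(cloud, control_plane, cluster, member_prefix, member_id, suffix):
    # CLOUD-CONTROL-PLANE-CLUSTER-MEMBER-PREFIXMEMBER_ID-NETWORK
    return "-".join([cloud, control_plane, cluster,
                     "{}{}".format(member_prefix, member_id), suffix])

def resource_hostname(cloud, control_plane, resource_prefix, member_id, suffix):
    # CLOUD-CONTROL-PLANE-RESOURCE-PREFIXMEMBER_ID-NETWORK,
    # with MEMBER_ID zero-padded to four digits
    return "-".join([cloud, control_plane,
                     "{}{:04d}".format(resource_prefix, member_id), suffix])

print(cluster_hostname("ardana", "cp1", "core", "m", 1, "mgmt"))
# ardana-cp1-core-m1-mgmt
print(resource_hostname("ardana", "cp1", "comp", 1, "mgmt"))
# ardana-cp1-comp0001-mgmt
```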

7.3 Persisted Data

The configuration processor makes allocation decisions for servers and IP addresses, and it needs to remember these decisions between successive runs so that adding new servers to the input model does not disrupt the previously deployed allocations.

To allow users to make multiple iterations of the input model before deployment, SUSE OpenStack Cloud only persists data when the administrator confirms, via the "ready-deployment" operation, that they are about to deploy the results. To understand this better, consider the following example:

Imagine you have completed your SUSE OpenStack Cloud deployment with servers A, B, and C and you want to add two new compute nodes by adding servers D and E to the input model.

When you add these to the input model and re-run the configuration processor, it will read the persisted data for A, B, and C and allocate D and E as new servers. The configuration processor now has allocation data for A, B, C, D, and E, which it keeps in a staging area (actually a special branch in Git) until it receives confirmation that it has done what you intended and you are ready to deploy the revised configuration.

If you notice that the role of E is wrong, for example it became a swift node instead of a nova node, you can change the input model and re-run the configuration processor. This is fine because the allocations of D and E have not been confirmed: the configuration processor will re-read the data for A, B, and C and re-allocate D and E to the correct clusters, updating the persisted data in the staging area.

You can loop through this as many times as needed. Each time, the configuration processor processes the deltas against what is deployed, not the results of the previous run. When you are ready to use the results of the configuration processor, run ready-deployment.yml, which commits the data in the staging area into the persisted data. The next run of the configuration processor will then start from the persisted data for A, B, C, D, and E.

7.3.1 Persisted Server Allocations

Server allocations are persisted by the administrator-defined server ID (see Section 6.5, “Servers”), and include the control plane, cluster/resource name, and ordinal within the cluster or resource group.

To guard against data loss, the configuration processor persists server allocations even when the server ID no longer exists in the input model. For example, if a server were accidentally removed from the model and the configuration processor allocated a new server to the same ordinal, it would be very difficult to recover from that situation.

The following example illustrates the behavior:

A cloud is deployed with four servers with IDs of A, B, C, and D that can all be used in a resource group with min-size=0 and max-size=3. At the end of this deployment the persisted state is as follows:

ID   Control Plane   Resource Group   Ordinal   State       Deployed As
A    ccp             compute          1         Allocated   mycloud-ccp-comp0001
B    ccp             compute          2         Allocated   mycloud-ccp-comp0002
C    ccp             compute          3         Allocated   mycloud-ccp-comp0003
D                                               Available

(In this example server D has not been allocated because the group is already at its maximum size and no other group requires this server.)

If server B is removed from the input model and the configuration processor is re-run, the state is changed to:

ID   Control Plane   Resource Group   Ordinal   State       Deployed As
A    ccp             compute          1         Allocated   mycloud-ccp-comp0001
B    ccp             compute          2         Deleted
C    ccp             compute          3         Allocated   mycloud-ccp-comp0003
D    ccp             compute          4         Allocated   mycloud-ccp-comp0004

The details associated with server B are still retained, but the configuration processor will not generate any deployment data for this server. Server D has been added to the group to meet the minimum size requirement but has been given a different ordinal and hence will get different names and IP addresses than were given to server B.

If server B is added back into the input model the resulting state will be:

ID   Control Plane   Resource Group   Ordinal   State       Deployed As
A    ccp             compute          1         Allocated   mycloud-ccp-comp0001
B    ccp             compute          2         Deleted
C    ccp             compute          3         Allocated   mycloud-ccp-comp0003
D    ccp             compute          4         Allocated   mycloud-ccp-comp0004

The configuration processor will issue a warning that server B cannot be returned to the compute group because it would exceed the max-size constraint. However, because the configuration processor knows that server B is associated with this group it will not allocate it to any other group that could use it, since that might lead to data loss on that server.

If the max-size value of the group were increased, then server B would be allocated back to the group, with its previous name and addresses (mycloud-ccp-comp0002).

Note that the configuration processor relies on the server ID to identify a physical server. If the ID value of a server is changed, the configuration processor will treat it as a new server. Conversely, if a different physical server is added with the same ID as a deleted server, the configuration processor will assume that it is the original server being returned to the model.
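The behavior in the example above can be modeled with a small Python sketch. This is a deliberate simplification: the real configuration processor also handles failure zones, server roles, and min-size, but the core rules (deleted servers keep their ordinal, ordinals are never reused, and a returning server only re-enters if max-size allows) look like this:

```python
def rerun(persisted, input_ids, max_size):
    """One simplified configuration-processor pass over a single group."""
    # Servers gone from the input model are marked Deleted, not forgotten:
    # their ordinal (and hence their names and IP addresses) stays reserved.
    for sid, rec in persisted.items():
        if sid not in input_ids:
            rec["state"] = "Deleted"

    def allocated():
        return sum(r["state"] == "Allocated" for r in persisted.values())

    # A returning server gets its old ordinal back if max-size allows;
    # otherwise it stays Deleted and is not offered to any other group.
    for sid in input_ids:
        rec = persisted.get(sid)
        if rec and rec["state"] == "Deleted" and allocated() < max_size:
            rec["state"] = "Allocated"

    # New servers fill the group up to max-size; ordinals are never reused.
    next_ord = max((r["ordinal"] for r in persisted.values()), default=0) + 1
    for sid in input_ids:
        if sid not in persisted and allocated() < max_size:
            persisted[sid] = {"ordinal": next_ord, "state": "Allocated"}
            next_ord += 1
    return persisted

state = {s: {"ordinal": i + 1, "state": "Allocated"}
         for i, s in enumerate("ABC")}
rerun(state, ["A", "C", "D"], max_size=3)       # B removed; D takes ordinal 4
rerun(state, ["A", "B", "C", "D"], max_size=3)  # B returns, but group is full
print(state["B"]["state"], state["D"]["ordinal"])
# Deleted 4
```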

You can force the removal of persisted data for servers that are no longer in the input model by running the configuration processor with the remove_deleted_servers option:

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml \
-e remove_deleted_servers="y"

7.3.2 Persisted Address Allocations

The configuration processor persists IP address allocations by the generated name (see Section 7.2, “Name Generation”). As with servers, once an address has been allocated it remains allocated until the configuration processor is explicitly told that it is no longer required. The configuration processor generates warnings for addresses that are persisted but no longer used.

You can remove persisted address allocations that are no longer used in the input model by running the configuration processor with the free_unused_addresses option:

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml \
-e free_unused_addresses="y"

7.4 Server Allocation

The configuration processor allocates servers to a cluster or resource group in the following sequence:

  1. Any servers that are persisted with a state of "allocated" are first returned to the cluster or resource group. Such servers are always allocated even if this contradicts the cluster size, failure-zones, or list of server roles since it is assumed that these servers are actively deployed.

  2. If the cluster or resource group is still below its minimum size, any servers that are persisted with a state of "deleted" but are now listed in the input model again (that is, the server was removed and has been added back) are re-added to the group, provided they meet the failure-zone and server-role criteria. If they do not meet the criteria, a warning is given and the server remains in a deleted state (that is, it is still not allocated to any other cluster or group). Such servers are not part of the current deployment, so you must resolve any conflicts before they can be redeployed.

  3. If the cluster or resource group is still below its minimum size, the configuration processor will allocate additional servers that meet the failure-zone and server-role criteria. If the allocation policy is set to "strict" then the failure zones of servers already in the cluster or resource group are not considered until an equal number of servers has been allocated from each zone.
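For illustration, the "strict" zone-balancing behavior in step 3 can be sketched as follows. This is a hypothetical simplification; the real processor also applies server-role, failure-zone membership, and size constraints:

```python
from collections import Counter

def allocate_strict(group, candidates, needed):
    """Pick servers so that failure zones are filled as evenly as possible."""
    zone_count = Counter(s["zone"] for s in group)
    pool, chosen = list(candidates), []
    for _ in range(min(needed, len(pool))):
        # Always take a server from the least-used failure zone; the
        # server id is only a deterministic tie-breaker for this sketch.
        best = min(pool, key=lambda s: (zone_count[s["zone"]], s["id"]))
        pool.remove(best)
        chosen.append(best)
        zone_count[best["zone"]] += 1
    return chosen

servers = [{"id": "s1", "zone": "AZ1"}, {"id": "s2", "zone": "AZ1"},
           {"id": "s3", "zone": "AZ2"}, {"id": "s4", "zone": "AZ2"}]
print([s["id"] for s in allocate_strict([], servers, 3)])
# ['s1', 's3', 's2']
```

Note how the second pick comes from AZ2 even though another AZ1 server is available: under the strict policy, no zone gets a second server until every zone has contributed one.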

7.5 Server Network Selection

Once the configuration processor has allocated a server to a cluster or resource group it uses the information in the associated interface-model to determine which networks need to be configured. It does this by:

  1. Looking at the service-components that are to run on the server (from the control-plane definition)

  2. Looking to see which network-group each of those components is attached to (from the network-groups definition)

  3. Looking to see if there are any network-tags related to a service-component running on this server, and if so, adding those network-groups to the list (also from the network-groups definition)

  4. Looking to see if there are any network-groups that the interface-model says should be forced onto the server

  5. It then searches the server-group hierarchy (as described in Section 5.2.9.2, “Server Groups and Networks”) to find a network in each of the network-groups it needs to attach to

If there is no network available to a server, either because the interface-model does not include the required network-group, or there is no network from that group in the appropriate part of the server-groups hierarchy, then the configuration processor will generate an error.

The configuration processor will also generate an error if the server address does not match any of the networks it will be connected to.
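The selection steps can be sketched as a short Python function. The names and data structures here are illustrative, not the processor's actual internals:

```python
def networks_for_server(components, group_of, tag_groups, forced, available):
    """Return {network-group: network} for one server, plus any errors."""
    wanted = set(forced)                         # step 4: forced network-groups
    for comp in components:                      # step 1: components on the server
        wanted.update(group_of.get(comp, []))    # step 2: their network-groups
        wanted.update(tag_groups.get(comp, []))  # step 3: groups via network-tags
    selected, errors = {}, []
    for group in sorted(wanted):                 # step 5: search the server-group
        if group in available:                   # hierarchy for a usable network
            selected[group] = available[group]
        else:
            errors.append("no network available in group " + group)
    return selected, errors

nets, errs = networks_for_server(
    ["nova-compute"],
    {"nova-compute": ["MANAGEMENT"]},
    {},
    ["ISCSI"],
    {"MANAGEMENT": "MANAGEMENT-NET-RACK1", "ISCSI": "ISCSI-NET"})
print(nets, errs)
# {'ISCSI': 'ISCSI-NET', 'MANAGEMENT': 'MANAGEMENT-NET-RACK1'} []
```

Dropping ISCSI from the available networks would return the error case described above instead of a selection.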

7.6 Network Route Validation

Once the configuration processor has allocated all of the required servers and matched them to the appropriate networks, it validates that all service-components have the required network routes to other service-components.

It does this by using the data in the services section of the input model, which provides details of which service-components need to connect to each other. This data is not configurable by the administrator; it is provided as part of the SUSE OpenStack Cloud release.

For each server, the configuration processor looks at the list of service-components it runs and determines the network addresses of every other service-component it needs to connect to (depending on the service, this might be a virtual IP address on a load balancer or a set of addresses for the service).

If the target address is on a network that this server is connected to, then no routing is required. If the target address is on a different network, the configuration processor looks at each network the server is connected to and at the routes defined in the corresponding network-group. If the network-group provides a route to the network-group of the target address, that route is considered valid.

Networks within the same network-group are always considered routed to each other; networks from different network-groups must have an explicit entry in the routes stanza of the network-group definition. Routes to a named network-group are always considered before a "default" route.

A warning is given for any route that uses the "default" route, since it is possible that the user did not intend to route this traffic. Such warnings can be removed by adding the appropriate network-group to the list of routes.

The configuration processor provides details of all routes between networks that it is expecting to be configured in the info/route_info.yml file.
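These rules can be summarized in a short sketch, using hypothetical data structures; the processor reads the real routes from the network-group definitions:

```python
def validate_route(server_groups, target_group, routes):
    """Classify a route from a server's network-groups to a target group.

    server_groups: network-groups the server is attached to
    routes: {network-group: [routed network-groups, possibly 'default']}
    """
    if target_group in server_groups:
        return "direct"          # same network-group: always considered routed
    for group in server_groups:  # explicit named routes are checked first
        if target_group in routes.get(group, []):
            return "routed"
    for group in server_groups:  # a 'default' route is valid but warned about
        if "default" in routes.get(group, []):
            return "default (warning)"
    return "error"

routes = {"MANAGEMENT": ["INTERNAL-API", "default"]}
print(validate_route(["MANAGEMENT"], "INTERNAL-API", routes))  # routed
print(validate_route(["MANAGEMENT"], "ISCSI", routes))         # default (warning)
```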

To illustrate how network routing is defined in the input model, consider the following example:

A compute server is configured to run nova-compute which requires access to the neutron API servers and a block storage service. The neutron API servers have a virtual IP address provided by a load balancer in the INTERNAL-API network-group and the storage service is connected to the ISCSI network-group. nova-compute itself is part of the set of components attached by default to the MANAGEMENT network-group. The intention is to have virtual machines on the compute server connect to the block storage via the ISCSI network.

The physical network is shown below:

The corresponding entries in the network-groups are:

  - name: INTERNAL-API
    hostname-suffix: intapi

    load-balancers:
      - provider: ip-cluster
        name: lb
        components:
          - default
        roles:
          - internal
          - admin

  - name: MANAGEMENT
    hostname-suffix: mgmt
    hostname: true

    component-endpoints:
      - default

    routes:
      - INTERNAL-API
      - default

  - name: ISCSI
    hostname-suffix: iscsi

    component-endpoints:
      - storage service

And the interface-model for the compute server looks like this:

  - name: INTERFACE_SET_COMPUTE
    network-interfaces:
      - name: BOND0
        device:
          name: bond0
        bond-data:
          options:
            mode: active-backup
            miimon: 200
            primary: hed5
          provider: linux
          devices:
            - name: hed4
            - name: hed5
        network-groups:
          - MANAGEMENT
          - ISCSI
When validating the route from nova-compute to the neutron API, the configuration processor will detect that the target address is on a network in the INTERNAL-API network group, and that the MANAGEMENT network (which is connected to the compute server) provides a route to this network, and thus considers this route valid.

When validating the route from nova-compute to a storage service, the configuration processor will detect that the target address is on a network in the ISCSI network group. However, because there is no service component on the compute server connected to the ISCSI network (according to the network-group definition), the ISCSI network will not have been configured on the compute server (see Section 7.5, “Server Network Selection”). The configuration processor will detect that the MANAGEMENT network-group provides a "default" route and thus considers the route valid (it is, of course, valid to route ISCSI traffic). However, because this uses the default route, a warning is issued:

#   route-generator-2.0       WRN: Default routing used between networks
The following networks are using a 'default' route rule. To remove this warning
either add an explicit route in the source network group or force the network to
attach in the interface model used by the servers.
  MANAGEMENT-NET-RACK1 to ISCSI-NET
    ardana-ccp-comp0001
  MANAGEMENT-NET-RACK2 to ISCSI-NET
    ardana-ccp-comp0002
  MANAGEMENT-NET-RACK3 to ISCSI-NET
    ardana-ccp-comp0003

To remove this warning, you can either add ISCSI to the list of routes in the MANAGEMENT network group (routed ISCSI traffic is still a valid configuration) or force the compute server to attach to the ISCSI network-group by adding it as a forced-network-group in the interface-model, like this:

  - name: INTERFACE_SET_COMPUTE
    network-interfaces:
      - name: BOND0
        device:
          name: bond0
        bond-data:
          options:
            mode: active-backup
            miimon: 200
            primary: hed5
          provider: linux
          devices:
            - name: hed4
            - name: hed5
        network-groups:
          - MANAGEMENT
        forced-network-groups:
          - ISCSI

With the attachment to the ISCSI network group forced, the configuration processor will attach the compute server to a network in that group and validate the route as either being direct or between networks in the same network-group.

The generated route_info.yml file will include entries such as the following, showing the routes that are still expected to be configured between networks in the MANAGEMENT network group and the INTERNAL-API network group.

  MANAGEMENT-NET-RACK1:
    INTERNAL-API-NET:
      default: false
      used_by:
        nova-compute:
          neutron-server:
            - ardana-ccp-comp0001
  MANAGEMENT-NET-RACK2:
    INTERNAL-API-NET:
      default: false
      used_by:
        nova-compute:
          neutron-server:
            - ardana-ccp-comp0003

7.7 Configuring neutron Provider VLANs

neutron provider VLANs are networks that map directly to an 802.1Q VLAN in the cloud provider’s physical network infrastructure. There are four aspects to a provider VLAN configuration:

  • Network infrastructure configuration (for example, the top-of-rack switch)

  • Server networking configuration (for compute nodes and neutron network nodes)

  • neutron configuration file settings

  • Creation of the corresponding network objects in neutron

The physical network infrastructure must be configured to convey the provider VLAN traffic as tagged VLANs to the cloud compute nodes and neutron network nodes. Configuration of the physical network infrastructure is outside the scope of the SUSE OpenStack Cloud 9 software.

SUSE OpenStack Cloud 9 automates the server networking configuration and the neutron configuration based on information in the cloud definition. To configure the system for provider VLANs, specify the neutron.networks.vlan tag with a provider-physical-network attribute on one or more network-groups as described in Section 6.13.2, “Network Tags”. For example (some attributes omitted for brevity):

  network-groups:

    - name: NET_GROUP_A
      tags:
        - neutron.networks.vlan:
              provider-physical-network: physnet1

    - name: NET_GROUP_B
      tags:
        - neutron.networks.vlan:
              provider-physical-network: physnet2

A network-group is associated with a server network interface via an interface-model as described in Section 6.11, “Interface Models”. For example (some attributes omitted for brevity):

  interface-models:
     - name: INTERFACE_SET_X
       network-interfaces:
        - device:
              name: bond0
          network-groups:
            - NET_GROUP_A
        - device:
              name: hed3
          network-groups:
            - NET_GROUP_B

A network-group used for provider VLANs may contain only a single SUSE OpenStack Cloud network, because that VLAN must span all compute nodes and any neutron network nodes/controllers (that is, it is a single L2 segment). The SUSE OpenStack Cloud network must be defined with tagged-vlan: false, otherwise a Linux VLAN network interface will be created. For example:

  networks:
     - name: NET_A
       tagged-vlan: false
       network-group: NET_GROUP_A
     - name: NET_B
       tagged-vlan: false
       network-group: NET_GROUP_B

When the cloud is deployed, SUSE OpenStack Cloud 9 will create the appropriate bridges on the servers, and set the appropriate attributes in the neutron configuration files (for example, bridge_mappings).
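For orientation, the generated neutron setting has the following general shape. This is an illustrative fragment only: the bridge names (br-bond0, br-hed3) are assumptions for this example, and the actual file contents and paths are produced by the deployment, not written by hand:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (illustrative fragment)
[ml2_type_vlan]
network_vlan_ranges = physnet1,physnet2

[ovs]
# physical-network name -> bridge created on the server's interface
bridge_mappings = physnet1:br-bond0,physnet2:br-hed3
```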

After the cloud has been deployed, create neutron network objects for each provider VLAN using the OpenStackClient CLI:

tux > sudo openstack network create --provider-network-type vlan \
--provider-physical-network physnet1 --provider-segment 101 MYNET101
tux > sudo openstack network create --provider-network-type vlan \
--provider-physical-network physnet2 --provider-segment 234 MYNET234

7.8 Standalone Cloud Lifecycle Manager

All the example configurations use a deployer-in-the-cloud scenario where the first controller is also the deployer/Cloud Lifecycle Manager. If you want to use a standalone Cloud Lifecycle Manager, you need to add the relevant details in control_plane.yml, servers.yml and related configuration files. Detailed instructions are available at Section 12.1, “Using a Dedicated Cloud Lifecycle Manager Node”.
