Applies to SUSE OpenStack Cloud 9

6 Configuration Objects

6.1 Cloud Configuration

The top-level cloud configuration file, cloudConfig.yml, defines some global values for SUSE OpenStack Cloud, as described in the table below.

The snippet below shows the start of the cloud configuration file.

---
  product:
    version: 2

  cloud:
    name: entry-scale-kvm

    hostname-data:
        host-prefix: ardana
        member-prefix: -m

    ntp-servers:
        - "ntp-server1"

    # dns resolving configuration for your site
    dns-settings:
      nameservers:
        - name-server1

    firewall-settings:
        enable: true
        # log dropped packets
        logging: true

    audit-settings:
       audit-dir: /var/audit
       default: disabled
       enabled-services:
         - keystone
Key | Value Description

name

An administrator-defined name for the cloud.
hostname-data (optional)

Provides control over some parts of the generated names (see Section 7.2, “Name Generation”)

Consists of two values:

  • host-prefix - default is to use the cloud name (above)

  • member-prefix - default is "-m"

ntp-servers (optional)

A list of external NTP servers your cloud has access to. If specified by name then the names need to be resolvable via the external DNS nameservers you specify in the next section. All servers running the "ntp-server" component will be configured to use these external NTP servers.

dns-settings (optional)

DNS configuration data that will be applied to all servers. See example configurations for a full list of values.

smtp-settings (optional)

SMTP client configuration data that will be applied to all servers. See example configurations for a full list of values.

firewall-settings (optional)

Used to enable/disable the firewall feature and to enable/disable logging of dropped packets.

The default is to have the firewall enabled.

audit-settings (optional)

Used to enable/disable the production of audit data from services.

The default is to have audit disabled for all services.

6.2 Control Plane

The snippet below shows the start of the control plane definition file.

---
  product:
     version: 2

  control-planes:
     - name: control-plane-1
       control-plane-prefix: cp1
       region-name: region0
       failure-zones:
         - AZ1
         - AZ2
         - AZ3
       configuration-data:
         - NEUTRON-CONFIG-CP1
         - OCTAVIA-CONFIG-CP1
       common-service-components:
         - logging-producer
         - monasca-agent
         - stunnel
         - lifecycle-manager-target
       clusters:
         - name: cluster1
           cluster-prefix: c1
           server-role: CONTROLLER-ROLE
           member-count: 3
           allocation-policy: strict
           service-components:
             - lifecycle-manager
             - ntp-server
             - swift-ring-builder
             - mysql
             - ip-cluster
             ...

       resources:
         - name: compute
           resource-prefix: comp
           server-role: COMPUTE-ROLE
           allocation-policy: any
           min-count: 0
           service-components:
              - ntp-client
              - nova-compute
              - nova-compute-kvm
              - neutron-l3-agent
              ...
Key | Value Description
name

This name identifies the control plane. This value is used to persist server allocations (see Section 7.3, “Persisted Data”) and cannot be changed once servers have been allocated.

control-plane-prefix (optional)

The control-plane-prefix is used as part of the hostname (see Section 7.2, “Name Generation”). If not specified, the control plane name is used.

region-name

This name identifies the keystone region within which services in the control plane will be registered. In SUSE OpenStack Cloud, multiple regions are not supported. Only Region0 is valid.

For clouds consisting of multiple control planes, this attribute should be omitted and the regions object should be used to set the region name (Region0).

uses (optional)

Identifies the services this control plane will consume from other control planes (see Section 6.2.3, “Multiple Control Planes”).

load-balancers (optional)

A list of load balancer definitions for this control plane (see Section 6.2.4, “Load Balancer Definitions in Control Planes”).

For a multi control-plane cloud, load balancers must be defined in each control plane. For a single control-plane cloud, they may be defined either in the control plane or as part of a network group.

common-service-components (optional)

This lists a set of service components that run on all servers in the control plane (clusters and resource pools).

failure-zones (optional)

A list of server-group names that servers for this control plane will be allocated from. If no failure-zones are specified, only servers not associated with a server-group will be used. (See Section 5.2.9.1, “Server Groups and Failure Zones” for a description of server-groups as failure zones.)

configuration-data (optional)

A list of configuration data settings to be used for services in this control plane (see Section 5.2.11, “Configuration Data”).

clusters

A list of clusters for this control plane (see Section 6.2.1, “ Clusters”).

resources

A list of resource groups for this control plane (see Section 6.2.2, “Resources”).

6.2.1 Clusters

Key | Value Description
name

Cluster and resource names must be unique within a control plane. This value is used to persist server allocations (see Section 7.3, “Persisted Data”) and cannot be changed once servers have been allocated.

cluster-prefix (optional)

The cluster prefix is used in the hostname (see Section 7.2, “Name Generation”). If not supplied then the cluster name is used.

server-role

This can either be a string (for a single role) or a list of roles. Only servers matching one of the specified server-roles will be allocated to this cluster. (see Section 5.2.4, “Server Roles” for a description of server roles)

service-components

The list of service-components to be deployed on the servers allocated for the cluster. (The common-service-components for the control plane are also deployed.)

member-count

min-count

max-count

(all optional)

Defines the number of servers to add to the cluster.

The number of servers that can be supported in a cluster depends on the services it is running. For example, MariaDB and RabbitMQ can only be deployed on clusters of one (non-HA) or three (HA) servers. Other services may support different cluster sizes.

If min-count is specified, then at least that number of servers will be allocated to the cluster. If min-count is not specified it defaults to a value of 1.

If max-count is specified, then the cluster will be limited to that number of servers. If max-count is not specified then all servers matching the required role and failure-zones will be allocated to the cluster.

Specifying member-count is equivalent to specifying min-count and max-count with the same value.
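For example, the two cluster sizing specifications below are equivalent (a sketch using the keys described above):

```yaml
# Fixed-size cluster of exactly three servers:
clusters:
  - name: cluster1
    cluster-prefix: c1
    server-role: CONTROLLER-ROLE
    member-count: 3

# Equivalent specification using explicit bounds:
clusters:
  - name: cluster1
    cluster-prefix: c1
    server-role: CONTROLLER-ROLE
    min-count: 3
    max-count: 3
```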

failure-zones (optional)

A list of server-groups that servers will be allocated from. If specified, it overrides the list of values specified for the control-plane. If not specified, the control-plane value is used. (see Section 5.2.9.1, “Server Groups and Failure Zones” for a description of server groups as failure zones).

allocation-policy (optional)

Defines how failure zones will be used when allocating servers.

strict: Server allocations will be distributed across all specified failure zones. (If max-count is not an exact multiple of the number of zones, some zones may provide one more server than others.)

any: Server allocations will be made from any combination of failure zones.

The default allocation-policy for a cluster is strict.

configuration-data (optional)

A list of configuration-data settings that will be applied to the services in this cluster. The values for each service will be combined with any values defined as part of the configuration-data list for the control-plane. If a value is specified by settings in both lists, the value defined here takes precedence.
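To illustrate the failure-zone override described above, the sketch below (the second cluster's role name is illustrative) pins one cluster to a subset of the control plane's failure zones and relaxes its allocation policy:

```yaml
control-planes:
  - name: control-plane-1
    failure-zones:        # default zones for the whole control plane
      - AZ1
      - AZ2
      - AZ3
    clusters:
      - name: cluster1
        server-role: CONTROLLER-ROLE
        member-count: 3
        # strict is the default for clusters: one server per zone here
      - name: cluster2
        server-role: MONITOR-ROLE   # illustrative role name
        member-count: 1
        failure-zones:    # overrides the control-plane list
          - AZ1
        allocation-policy: any
```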

6.2.2 Resources

Key | Value Description
name

The name of this group of resources. Cluster and resource group names must be unique within a control plane, and a cluster cannot share a name with a resource group in the same control plane.

This value is used to persist server allocations (see Section 7.3, “Persisted Data”) and cannot be changed once servers have been allocated.

resource-prefix The resource-prefix is used in the name generation. (see Section 7.2, “Name Generation”)
server-role This can either be a string (for a single role) or a list of roles. Only servers matching one of the specified server-roles will be allocated to this resource group. (see Section 5.2.4, “Server Roles” for a description of server roles).
service-components The list of service-components to be deployed on the servers in this resource group. (The common-service-components for the control plane are also deployed.)

member-count

min-count

max-count

(all optional)

Defines the number of servers to add to the cluster.

The number of servers that can be supported in a cluster depends on the services it is running. For example, MariaDB and RabbitMQ can only be deployed on clusters of one (non-HA) or three (HA) servers. Other services may support different cluster sizes.

If min-count is specified, then at least that number of servers will be allocated to the cluster. If min-count is not specified it defaults to a value of 1.

If max-count is specified, then the cluster will be limited to that number of servers. If max-count is not specified then all servers matching the required role and failure-zones will be allocated to the cluster.

Specifying member-count is equivalent to specifying min-count and max-count with the same value.

failure-zones (optional) A list of server-groups that servers will be allocated from. If specified, it overrides the list of values specified for the control-plane. If not specified, the control-plane value is used. (see Section 5.2.9.1, “Server Groups and Failure Zones” for a description of server groups as failure zones).
allocation-policy (optional)

Defines how failure zones will be used when allocating servers.

strict: Server allocations will be distributed across all specified failure zones. (If max-count is not an exact multiple of the number of zones, some zones may provide one more server than others.)

any: Server allocations will be made from any combination of failure zones.

The default allocation-policy for resources is any.

configuration-data (optional) A list of configuration-data settings that will be applied to the services in this resource group. The values for each service will be combined with any values defined as part of the configuration-data list for the control-plane. If a value is specified by settings in both lists, the value defined here takes precedence.

6.2.3 Multiple Control Planes

The dependencies between service components (for example, nova needs MariaDB and the keystone API) are defined as part of the service definitions provided by SUSE OpenStack Cloud; the control planes define how those dependencies will be met. For clouds consisting of multiple control planes, the relationship between services in different control planes is defined by a uses attribute in the control-plane object. Services will always use other services in the same control plane before looking to see if the required service can be provided from another control plane. For example, a service component in control-plane cp-2 (for example, nova-api) might use service components from control-plane cp-shared (for example, keystone-api).

control-planes:
    - name: cp-2
      uses:
        - from: cp-shared
          service-components:
            - any
Key | Value Description
from The name of the control-plane providing services which may be consumed by this control-plane.
service-components A list of service components from the specified control-plane which may be consumed by services in this control-plane. The reserved keyword any indicates that any service component from the specified control-plane may be consumed by services in this control-plane.
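Instead of the reserved keyword any, the uses list can name specific service components. For example (a sketch), a control plane consuming only the identity service from a shared control plane:

```yaml
control-planes:
  - name: cp-2
    uses:
      - from: cp-shared
        service-components:
          - keystone-api   # consume only keystone from cp-shared
```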

6.2.4 Load Balancer Definitions in Control Planes

Starting in SUSE OpenStack Cloud 9, a load-balancer may be defined within a control-plane object and referenced by name from a network-groups object. The following example shows load balancer extlb defined in control-plane cp1 and referenced from the EXTERNAL-API network group. See Section 6.3, “Load Balancers” for a complete description of load balancer attributes.

network-groups:
    - name: EXTERNAL-API
      load-balancers:
        - extlb

  control-planes:
    - name: cp1
      load-balancers:
        - provider: ip-cluster
          name: extlb
          external-name:
          tls-components:
            - default
          roles:
            - public
          cert-file: cp1-extlb-cert

6.3 Load Balancers

Load balancers may be defined as part of a network-group object, or as part of a control-plane object. When a load-balancer is defined in a control-plane, it must be referenced by name only from the associated network-group object.

For clouds consisting of multiple control planes, load balancers must be defined as part of a control-plane object. This allows different load balancer configurations for each control plane.

In either case, a load-balancer definition has the following attributes:

load-balancers:
        - provider: ip-cluster
          name: extlb
          external-name:

          tls-components:
            - default
          roles:
            - public
          cert-file: cp1-extlb-cert
Key | Value Description

name An administrator-defined name for the load balancer. This name is used to make the association from a network-group.
provider The service component that implements the load balancer. Currently only ip-cluster (ha-proxy) is supported. Future releases will provide support for external load balancers.
roles The list of endpoint roles that this load balancer provides (see below). Valid roles are public, internal, and admin. To ensure separation of concerns, the role public cannot be combined with any other role. See Load Balancers for an example of how the role provides endpoint separation.
components (optional) The list of service-components for which the load balancer provides a non-encrypted virtual IP address.
tls-components (optional) The list of service-components for which the load balancer provides TLS-terminated virtual IP addresses.
external-name (optional) The name to be registered in keystone for the publicURL. If not specified, the virtual IP address will be registered. Note that this value cannot be changed after the initial deployment.
cert-file (optional) The name of the certificate file to be used for TLS endpoints. If not specified, a file name will be constructed using the format CP-NAME-LB-NAME-cert, where CP-NAME is the control-plane name and LB-NAME is the load-balancer name.
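To illustrate the distinction between components and tls-components, the sketch below (the nova-metadata entry is an illustrative component name) terminates TLS for the default set of services while exposing one component unencrypted:

```yaml
load-balancers:
  - provider: ip-cluster
    name: extlb
    roles:
      - public
    tls-components:
      - default            # TLS-terminated VIPs for the default set
    components:
      - nova-metadata      # plain (non-TLS) VIP for this component (illustrative)
    cert-file: cp1-extlb-cert
```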

6.4 Regions

The regions configuration object is used to define how a set of services from one or more control planes are mapped into OpenStack regions (entries within the keystone catalog). In SUSE OpenStack Cloud, multiple regions are not supported. Only Region0 is valid.

Within each region a given service is provided by one control plane, but the set of services in the region may be provided by multiple control planes.

Key | Value Description

name The name of the region in the keystone service catalog.

includes A list of services to include in this region, broken down by the control planes providing the services. Each entry in the list has the following attributes:

Key | Value Description

control-plane A control-plane name.

services A list of service names. This list specifies the services from this control-plane to be included in this region. The reserved keyword all may be used when all services from the control-plane are to be included.
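Putting the attributes above together, a minimal regions object for a single control plane might look like this sketch (the control-plane name matches the earlier example):

```yaml
---
  product:
    version: 2

  regions:
    - name: Region0
      includes:
        - control-plane: control-plane-1
          services:
            - all          # include every service from this control plane
```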

6.5 Servers

The servers configuration object is used to list the available servers for deploying the cloud.

Optionally, it can be used as an input file to the operating system installation process, in which case some additional fields (identified below) will be necessary.

---
  product:
    version: 2

  baremetal:
    subnet: 192.168.10.0
    netmask: 255.255.255.0

  servers:
    - id: controller1
      ip-addr: 192.168.10.3
      role: CONTROLLER-ROLE
      server-group: RACK1
      nic-mapping: HP-DL360-4PORT
      mac-addr: b2:72:8d:ac:7c:6f
      ilo-ip: 192.168.9.3
      ilo-password: password
      ilo-user: admin

    - id: controller2
      ip-addr: 192.168.10.4
      role: CONTROLLER-ROLE
      server-group: RACK2
      nic-mapping: HP-DL360-4PORT
      mac-addr: 8a:8e:64:55:43:76
      ilo-ip: 192.168.9.4
      ilo-password: password
      ilo-user: admin
Key | Value Description
id An administrator-defined identifier for the server. IDs must be unique and are used to track server allocations. (see Section 7.3, “Persisted Data”).
ip-addr

The IP address is used by the configuration processor to install and configure the service components on this server.

This IP address must be within the range of a network defined in this model.

When the servers file is being used for operating system installation, this IP address will be assigned to the node by the installation process, and the associated network must be an untagged VLAN.

hostname (optional) The value to use for the hostname of the server. If specified this will be used to set the hostname value of the server which will in turn be reflected in systems such as nova, monasca, etc. If not specified the hostname will be derived based on where the server is used and the network defined to provide hostnames.
role Identifies the server-role of the server.
nic-mapping Name of the nic-mappings entry to apply to this server. (See Section 6.12, “NIC Mappings”.)
server-group (optional) Identifies the server-groups entry that this server belongs to. (see Section 5.2.9, “Server Groups”)
boot-from-san (optional) Must be set to true if the server needs to be configured to boot from SAN storage. Default is false.
fcoe-interfaces (optional) A list of network devices that will be used for accessing FCoE storage. This is only needed for devices that present as native FCoE, not devices such as Emulex which present as an FC device.
ansible-options (optional) A string of additional variables to be set when defining the server as a host in Ansible. For example, ansible_ssh_port=5986
mac-addr (optional) Needed when the servers file is being used for operating system installation. This identifies the MAC address on the server that will be used to network install the operating system.
kopt-extras (optional) Provides additional command line arguments to be passed to the booting network kernel. For example, vga=769 sets the video mode for the install to low resolution which can be useful for remote console users.
ilo-ip (optional) Needed when the servers file is being used for operating system installation. This provides the IP address of the power management (for example, IPMI, iLO) subsystem.
ilo-user (optional) Needed when the servers file is being used for operating system installation. This provides the user name of the power management (for example, IPMI, iLO) subsystem.
ilo-password (optional) Needed when the servers file is being used for operating system installation. This provides the user password of the power management (for example, IPMI, iLO) subsystem.
ilo-extras (optional) Needed when the servers file is being used for operating system installation. Additional options to pass to ipmitool. For example, this may be required if the servers require additional IPMI addressing parameters.
moonshot (optional) Provides the node identifier for HPE Moonshot servers, for example, c4n1 where c4 is the cartridge and n1 is node.
hypervisor-id (optional) This attribute serves two purposes: it indicates that this server is a virtual machine (VM), and it specifies the server id of the Cloud Lifecycle Manager hypervisor that will host the VM.
ardana-hypervisor (optional) When set to True, this attribute identifies a server as a Cloud Lifecycle Manager hypervisor. A Cloud Lifecycle Manager hypervisor is a server that may be used to host other servers that are themselves virtual machines. Default value is False.
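As a sketch of the hypervisor-id and ardana-hypervisor attributes described above (the IDs, role names, and addresses are illustrative), a physical server hosting a virtual controller could be modeled as:

```yaml
servers:
  - id: hypervisor1
    ip-addr: 192.168.10.20
    role: HYPERVISOR-ROLE        # illustrative role name
    nic-mapping: HP-DL360-4PORT
    ardana-hypervisor: true      # this server hosts VMs

  - id: controller1
    ip-addr: 192.168.10.3
    role: CONTROLLER-ROLE
    nic-mapping: HP-DL360-4PORT
    hypervisor-id: hypervisor1   # marks this server as a VM on hypervisor1
```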

6.6 Server Groups

The server-groups configuration object provides a mechanism for organizing servers and networks into a hierarchy that can be used for allocation and network resolution.

---
  product:
     version: 2

  server-groups:
     - name: CLOUD
       server-groups:
         - AZ1
         - AZ2
         - AZ3
       networks:
         - EXTERNAL-API-NET
         - EXTERNAL-VM-NET
         - GUEST-NET
         - MANAGEMENT-NET

     #
     # Create a group for each failure zone
     #
     - name: AZ1
       server-groups:
         - RACK1

     - name: AZ2
       server-groups:
         - RACK2

     - name: AZ3
       server-groups:
         - RACK3

     #
     # Create a group for each rack
     #
     - name: RACK1
     - name: RACK2
     - name: RACK3
Key | Value Description
name An administrator-defined name for the server group. The name is used to link server-groups together and to identify server-groups to be used as failure zones in a control-plane. (see Section 6.2, “Control Plane”)
server-groups (optional) A list of server-group names that are nested below this group in the hierarchy. Each server group can only be listed in one other server group (that is in a strict tree topology).
networks (optional) A list of network names (see Section 5.2.10.2, “Networks”). See Section 5.2.9.2, “Server Groups and Networks” for a description of how networks are matched to servers via server groups.

6.7 Server Roles

The server-roles configuration object is a list of the various server roles that you can use in your cloud. Each server role is linked to other configuration objects:

Server roles are referenced in the servers configuration object above (see Section 6.5, “Servers”).

---
  product:
     version: 2

  server-roles:

     - name: CONTROLLER-ROLE
       interface-model: CONTROLLER-INTERFACES
       disk-model: CONTROLLER-DISKS

     - name: COMPUTE-ROLE
       interface-model: COMPUTE-INTERFACES
       disk-model: COMPUTE-DISKS
       memory-model: COMPUTE-MEMORY
       cpu-model: COMPUTE-CPU
Key | Value Description

name

An administrator-defined name for the role.
interface-model

The name of the interface-model to be used for this server-role.

Different server-roles can use the same interface-model.

disk-model

The name of the disk-model to use for this server-role.

Different server-roles can use the same disk-model.

memory-model (optional)

The name of the memory-model to use for this server-role.

Different server-roles can use the same memory-model.

cpu-model (optional)

The name of the cpu-model to use for this server-role.

Different server-roles can use the same cpu-model.

6.8 Disk Models

The disk-models configuration object is used to specify how the directly attached disks on the server should be configured. It can also identify which service or service component consumes the disk, for example, swift object server, and provide service-specific information associated with the disk. It is also used to specify disk sizing information for virtual machine servers.

Disks can be used as raw devices or as logical volumes and the disk model provides a configuration item for each.

If the operating system has been installed by the SUSE OpenStack Cloud installation process, the root disk will already have been set up as a volume-group with a single logical-volume. This logical-volume will have been created on a partition identified, symbolically, in the configuration files as /dev/sda_root, because different BIOS systems (UEFI, Legacy) result in different partition numbers on the root disk.

---
  product:
     version: 2

  disk-models:
  - name: SES-DISKS

    volume-groups:
       - ...
    device-groups:
       - ...
    vm-size:
       ...
Key | Value Description

name The name of the disk-model that is referenced from one or more server-roles.

volume-groups A list of volume-groups to be configured (see below). There must be at least one volume-group describing the root file system.

device-groups (optional) A list of device-groups (see below).

6.8.1 Volume Groups

The volume-groups configuration object is used to define volume groups and their constituent logical volumes.

Note that volume-groups are not exact analogs of device-groups. A volume-group specifies a set of physical volumes used to make up a volume-group that is then subdivided into multiple logical volumes.

The SUSE OpenStack Cloud operating system installation automatically creates a volume-group named "ardana-vg" on the first drive in the system. It creates a "root" logical volume there. The volume-group can be expanded by adding more physical-volumes (see examples). In addition, it is possible to create more logical-volumes on this volume-group to provide dedicated capacity for different services or file system mounts.

   volume-groups:
     - name: ardana-vg
       physical-volumes:
         - /dev/sda_root

       logical-volumes:
         - name: root
           size: 35%
           fstype: ext4
           mount: /

         - name: log
           size: 50%
           mount: /var/log
           fstype: ext4
           mkfs-opts: -O large_file

         - ...

     - name: vg-comp
       physical-volumes:
         - /dev/sdb
       logical-volumes:
         - name: compute
           size: 95%
           mount: /var/lib/nova
           fstype: ext4
           mkfs-opts: -O large_file
Key | Value Description

name The name that will be assigned to the volume-group.
physical-volumes

A list of physical disks that make up the volume group.

As installed by the SUSE OpenStack Cloud operating system install process, the volume group "ardana-vg" will use a large partition (sda_root) on the first disk. This can be expanded by adding additional disk(s).

logical-volumes A list of logical volume devices to create from the above named volume group.
name The name to assign to the logical volume.
size The size, expressed as a percentage of the entire volume group capacity, to assign to the logical volume.
fstype (optional) The file system type to create on the logical volume. If none specified, the volume is not formatted.
mkfs-opts (optional) Options, for example, -O large_file to pass to the mkfs command.
mode (optional) Sets the mode bits on the mounted file system, expressed either as a symbolic representation or as an octal number representing the bit pattern for the new mode bits.
mount (optional) Mount point for the file system.
consumer attributes (optional, consumer dependent)

These will vary according to the service consuming the device group. The examples section provides sample content for the different services.

Important

Multipath storage should be listed as the corresponding /dev/mapper/mpathX

6.8.2 Device Groups

The device-groups configuration object provides the mechanism to make the whole of a physical disk available to a service.

Key | Value Description

name An administrator-defined name for the device group.
devices

A list of named devices to be assigned to this group. There must be at least one device in the group.

Multipath storage should be listed as the corresponding /dev/mapper/mpathX

consumer

Identifies the name of the storage service (for example, swift or cinder) that will consume the disks in this device group.

consumer attributes

These will vary according to the service consuming the device group. The examples section provides sample content for the different services.
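A device-group entry following the keys above might look like the sketch below; the consumer attribute layout varies per service, so the swift-specific attributes here are illustrative only:

```yaml
device-groups:
  - name: swiftobj
    devices:
      - name: /dev/sdb
      - name: /dev/sdc
    consumer:
      name: swift
      attrs:               # consumer-dependent attributes (illustrative)
        rings:
          - object-0
```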

6.9 Memory Models

The memory-models configuration object describes details of the optional configuration of Huge Pages. It also describes the amount of memory to be allocated for virtual machine servers.

The memory-model allows the number of pages of a particular size to be configured at the server level or at the numa-node level.

The following example would configure:

  • five 2 MB pages in each of numa nodes 0 and 1

  • three 1 GB pages (distributed across all numa nodes)

  • six 2 MB pages (distributed across all numa nodes)

memory-models:
    - name: COMPUTE-MEMORY-NUMA
      default-huge-page-size: 2M
      huge-pages:
        - size: 2M
          count: 5
          numa-node: 0
        - size: 2M
          count: 5
          numa-node: 1
        - size: 1G
          count: 3
        - size: 2M
          count: 6
    - name: VIRTUAL-CONTROLLER-MEMORY
      vm-size:
        ram: 6G
Key | Value Description

name The name of the memory-model that is referenced from one or more server-roles.
default-huge-page-size (optional)

The default page size to use when allocating huge pages.

If not specified, the default is set by the operating system.

huge-pages A list of huge page definitions (see below).

6.9.1 Huge Pages

KeyValue Description
size

The page size in kilobytes, megabytes, or gigabytes specified as nX where:

n

is an integer greater than zero

X

is one of "K", "M" or "G"

count The number of pages of this size to create (must be greater than zero).
numa-node (optional)

If specified, the pages will be created in the memory associated with this numa node.

If not specified, the pages are distributed across numa nodes by the operating system.

6.10 CPU Models

The cpu-models configuration object describes how CPUs are assigned for use by service components such as nova (for VMs) and Open vSwitch (for DPDK), and whether or not those CPUs are isolated from the general kernel SMP balancing and scheduling algorithms. It also describes the number of vCPUs for virtual machine servers.

---
  product:
     version: 2

  cpu-models:
    - name: COMPUTE-CPU
      assignments:
        - components:
            - nova-compute-kvm
          cpu:
            - processor-ids: 0-1,3,5-7
              role: vm
        - components:
            - openvswitch
          cpu:
            - processor-ids: 4,12
              isolate: False
              role: eal
            - processor-ids: 2,10
              role: pmd
    - name: VIRTUAL-CONTROLLER-CPU
      vm-size:
         vcpus: 4

cpu-models

Key Value Description
name An administrator-defined name for the cpu model.
assignments A list of CPU assignments.

6.10.1 CPU Assignments

assignments

Key Value Description
components A list of components to which the CPUs will be assigned.
cpu A list of CPU usage objects (see Section 6.10.2, “CPU Usage” below).

6.10.2 CPU Usage

cpu

Key Value Description
processor-ids A list of CPU IDs as seen by the operating system.
isolate (optional)

A Boolean value which indicates if the CPUs are to be isolated from the general kernel SMP balancing and scheduling algorithms. The specified processor IDs will be configured in the Linux kernel isolcpus parameter.

The default value is True.

role A role within the component for which the CPUs will be used.

6.10.3 Components and Roles in the CPU Model

Component Role Description
nova-compute-kvm vm

The specified processor IDs will be configured in the nova vcpu_pin_set option.

openvswitch eal

The specified processor IDs will be configured in the Open vSwitch DPDK EAL -c (coremask) option. Refer to the DPDK documentation for details.

pmd

The specified processor IDs will be configured in the Open vSwitch pmd-cpu-mask option. Refer to the Open vSwitch documentation and the ovs-vswitchd.conf.db man page for details.

6.11 Interface Models

The interface-models configuration object describes how network interfaces are bonded and the mapping of network groups onto interfaces. Interface devices are identified by name and mapped to a particular physical port by the nic-mapping (see Section 5.2.10.4, “NIC Mapping”).

---
  product:
     version: 2

  interface-models:
     - name: INTERFACE_SET_CONTROLLER
       network-interfaces:
          - name: BONDED_INTERFACE
            device:
              name: bond0
            bond-data:
              provider: linux
              devices:
                - name: hed3
                - name: hed4
              options:
                mode: active-backup
                miimon: 200
                primary: hed3
            network-groups:
               - EXTERNAL_API
               - EXTERNAL_VM
               - GUEST

          - name: UNBONDED_INTERFACE
            device:
               name: hed0
            network-groups:
               - MGMT


       fcoe-interfaces:
          - name: FCOE_DEVICES
            devices:
              - eth7
              - eth8


     - name: INTERFACE_SET_DPDK
       network-interfaces:
          - name: BONDED_DPDK_INTERFACE
            device:
              name: bond0
            bond-data:
              provider: openvswitch
              devices:
                - name: dpdk0
                - name: dpdk1
              options:
                mode: active-backup
            network-groups:
               - GUEST
          - name: UNBONDED_DPDK_INTERFACE
            device:
               name: dpdk2
            network-groups:
               - PHYSNET2
       dpdk-devices:
         - devices:
             - name: dpdk0
             - name: dpdk1
             - name: dpdk2
               driver: igb_uio
           components:
             - openvswitch
           eal-options:
             - name: socket-mem
               value: 1024,0
             - name: n
               value: 2
           component-options:
             - name: n-dpdk-rxqs
               value: 64
Key Value Description
name An administrator-defined name for the interface model.
network-interfaces A list of network interface definitions.
fcoe-interfaces (optional) (see Section 6.11.2, “fcoe-interfaces”)

A list of network interfaces that will be used for Fibre Channel over Ethernet (FCoE). This is only needed for devices that present as native FCoE devices, not for cards such as Emulex that present FCoE as an FC device.

dpdk-devices (optional) A list of DPDK device definitions.
Important

The devices must be raw device names, not names controlled via a nic-mapping.

6.11.1 network-interfaces

The network-interfaces configuration object has the following attributes:

Key Value Description
name An administrator-defined name for the interface.
device

A dictionary containing the network device name (as seen on the associated server) and associated properties (see Section 6.11.1.1, “network-interfaces device” for details).

network-groups (optional if forced-network-groups is defined) A list of one or more network-groups (see Section 6.13, “Network Groups”) containing networks (see Section 6.14, “Networks”) that can be accessed via this interface. Networks in these groups will only be configured if there is at least one service-component on the server which matches the list of component-endpoints defined in the network-group.
forced-network-groups (optional if network-groups is defined) A list of one or more network-groups (see Section 6.13, “Network Groups”) containing networks (see Section 6.14, “Networks”) that can be accessed via this interface. Networks in these groups are always configured on the server.
passthrough-network-groups (optional) A list of one or more network-groups (see Section 6.13, “Network Groups”) containing networks (see Section 6.14, “Networks”) that can be accessed by servers running as virtual machines on an Cloud Lifecycle Manager hypervisor server. Networks in these groups are not configured on the Cloud Lifecycle Manager hypervisor server unless they also are specified in the network-groups or forced-network-groups attributes.

6.11.1.1 network-interfaces device

network-interfaces device

The network-interfaces device configuration object has the following attributes:

Key Value Description
name

When configuring a bond, this is used as the bond device name - the names of the devices to be bonded are specified in the bond-data section.

If the interface is not bonded, this must be the name of the device specified by the nic-mapping (see NIC Mapping).

vf-count (optional)

Indicates that the interface is to be used for SR-IOV. The value is the number of virtual functions to be created. The associated device specified by the nic-mapping must have a valid nic-device-type.

vf-count cannot be specified on bonded interfaces.

Interfaces used for SR-IOV must be associated with a network with tagged-vlan: false.

sriov-only (optional)

Only valid when vf-count is specified. If set to true then the interface is to be used for virtual functions only and the physical function will not be used.

The default value is False.

pci-pt (optional)

If set to true then the interface is used for PCI passthrough.

The default value is False.
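As an illustration of these attributes, a hedged sketch of network-interfaces entries using SR-IOV and PCI passthrough (the interface names, device names, and counts are illustrative; the devices must exist in the server's nic-mapping, and the network group attached to the SR-IOV interface must use an untagged network):

```yaml
   network-interfaces:
      - name: SRIOV_INTERFACE
        device:
           name: hed5
           vf-count: 6        # create six virtual functions
           sriov-only: false  # the physical function remains usable
        network-groups:
           - GUEST            # its network must have tagged-vlan: false

      - name: PASSTHROUGH_INTERFACE
        device:
           name: hed6
           pci-pt: true       # the whole device is used for PCI passthrough
```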

6.11.2 fcoe-interfaces

The fcoe-interfaces configuration object has the following attributes:

Key Value Description
name An administrator-defined name for the group of FCoE interfaces.
devices

A list of network devices that will be configured for FCoE.

Entries in this list must be the names of devices specified by the nic-mapping (see Section 6.12, “NIC Mappings”).

6.11.3 dpdk-devices

The dpdk-devices configuration object has the following attributes:

Key Value Description
devices

A list of network devices to be configured for DPDK. See Section 6.11.3.1, “ dpdk-devices devices”.

eal-options

A list of key-value pairs that may be used to set DPDK Environmental Abstraction Layer (EAL) options. Refer to the DPDK documentation for details.

Note that the cpu-model should be used to specify the processor IDs to be used by EAL for this component. The EAL coremask (-c) option will be set automatically based on the information in the cpu-model, and so should not be specified here. See Section 6.10, “ CPU Models”.

component-options

A list of key-value pairs that may be used to set component-specific configuration options.

6.11.3.1 dpdk-devices devices

The devices configuration object within dpdk-devices has the following attributes:

Key Value Description
name The name of a network device to be used with DPDK. The device names must be the logical-name specified by the nic-mapping (see Section 6.12, “NIC Mappings”).
driver (optional)

Defines the userspace I/O driver to be used for network devices where the native device driver does not provide userspace I/O capabilities.

The default value is igb_uio.

6.11.3.2 DPDK component-options for the openvswitch component

The following options are supported for use with the openvswitch component:

Name Value Description
n-dpdk-rxqs

Number of rx queues for each DPDK interface. Refer to the Open vSwitch documentation and the ovs-vswitchd.conf.db man page for details.

Note that the cpu-model should be used to define the CPU affinity of the Open vSwitch PMD (Poll Mode Driver) threads. The Open vSwitch pmd-cpu-mask option will be set automatically based on the information in the cpu-model. See Section 6.10, “ CPU Models”.

6.12 NIC Mappings

The nic-mappings configuration object is used to ensure that the network device name used by the operating system always maps to the same physical device. A nic-mapping is associated with a server in the server definition file. Devices should be named hedN to avoid name clashes with devices configured during the operating system install and with interfaces that are not being managed by SUSE OpenStack Cloud; all devices on a baremetal machine should be specified in the file. An excerpt from nic_mappings.yml illustrates:

---
  product:
    version: 2

  nic-mappings:

    - name: HP-DL360-4PORT
      physical-ports:
        - logical-name: hed1
          type: simple-port
          bus-address: "0000:07:00.0"

        - logical-name: hed2
          type: simple-port
          bus-address: "0000:08:00.0"
          nic-device-type: '8086:10fb'

        - logical-name: hed3
          type: multi-port
          bus-address: "0000:09:00.0"
          port-attributes:
              port-num: 0

        - logical-name: hed4
          type: multi-port
          bus-address: "0000:09:00.0"
          port-attributes:
              port-num: 1

Each entry in the nic-mappings list has the following attributes:

Key Value Description
name An administrator-defined name for the mapping. This name may be used in a server definition (see Section 6.5, “Servers”) to apply the mapping to that server.
physical-ports A list containing device name to address mapping information.

Each entry in the physical-ports list has the following attributes:

Key Value Description
logical-name The network device name that will be associated with the device at the specified bus-address. The logical-name specified here can be used as a device name in network interface model definitions. (See Section 6.11, “Interface Models”.)
type

The type of port. SUSE OpenStack Cloud 9 supports "simple-port" and "multi-port". Use "simple-port" if your device has a unique bus-address. Use "multi-port" if your hardware requires a "port-num" attribute to identify a single port on a multi-port device. An example of such a device is:

  • Mellanox Technologies MT26438 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE Virtualization+]

bus-address PCI bus address of the port. Enclose the bus address in quotation marks so YAML does not misinterpret the embedded colon (:) characters. See Chapter 14, Pre-Installation Checklist for details on how to determine this value.
port-attributes (required if type is multi-port) Provides a list of attributes for the physical port. The current implementation supports only one attribute, "port-num". Multi-port devices share a bus-address. Use the "port-num" attribute to identify which physical port on the multi-port device to map. See Chapter 14, Pre-Installation Checklist for details on how to determine this value.
nic-device-type (optional) Specifies the PCI vendor ID and device ID of the port in the format of VENDOR_ID:DEVICE_ID, for example, 8086:10fb.

6.13 Network Groups

Network-groups define the overall network topology, including where service-components connect, what load balancers are to be deployed, which connections use TLS, and network routing. They also provide the data needed to map neutron's network configuration to the physical networking.

Note

The name of the "MANAGEMENT" network-group cannot be changed. It must be upper case. Every SUSE OpenStack Cloud requires this network group in order to be valid.

---
  product:
     version: 2

  network-groups:

     - name: EXTERNAL-API
       hostname-suffix: extapi

       load-balancers:
         - provider: ip-cluster
           name: extlb
           external-name:

           tls-components:
             - default
           roles:
            - public
           cert-file: my-public-entry-scale-kvm-cert

     - name: EXTERNAL-VM
       tags:
         - neutron.l3_agent.external_network_bridge

     - name: GUEST
       hostname-suffix: guest
       tags:
         - neutron.networks.vxlan

     - name: MANAGEMENT
       hostname-suffix: mgmt
       hostname: true

       component-endpoints:
         - default

       routes:
         - default

       load-balancers:
         - provider: ip-cluster
           name: lb
           components:
             - default
           roles:
             - internal
             - admin

       tags:
         - neutron.networks.vlan:
             provider-physical-network: physnet1
Key Value Description
name An administrator-defined name for the network group. The name is used to make references from other parts of the input model.
component-endpoints (optional) The list of service-components that will bind to or need direct access to networks in this network-group.
hostname (optional)

If set to true, the name of the address associated with a network in this group will be used to set the hostname of the server.

hostname-suffix (optional) If supplied, this string will be used in the name generation (see Section 7.2, “Name Generation”). If not specified, the name of the network-group will be used.
load-balancers (optional)

A list of load balancers to be configured on networks in this network-group. Because load balancers need a virtual IP address, any network group that contains a load balancer can only have one network associated with it.

For clouds consisting of a single control plane, a load balancer may be fully defined within a network-group object. See Section 6.13.1, “Load Balancer Definitions in Network Groups”.

Starting in SUSE OpenStack Cloud 9, a load balancer may be defined within a control-plane object and referenced by name from a network-group object. See Section 6.3, “Load Balancers” for load balancer definitions in control planes.

routes (optional)

A list of network-groups that networks in this group provide access to via their gateway. This can include the value default to define the default route.

A network group with no services attached to it can be used to define routes to external networks.

The name of a neutron provider network defined via configuration-data (see Section 6.16.2.1, “neutron-provider-networks”) can also be included in this list.

tags (optional)

A list of network tags. Tags provide the linkage between the physical network configuration and the neutron network configuration.

Starting in SUSE OpenStack Cloud 9, network tags may be defined as part of a neutron configuration-data object rather than as part of a network-group object (see Section 6.16.2, “Neutron Configuration Data”).

mtu (optional)

Specifies the MTU value required for networks in this network group. If not specified, a default value of 1500 is used.

See Section 6.13.3, “MTU (Maximum Transmission Unit)” on how MTU settings are applied to interfaces when there are multiple tagged networks on the same interface.

Important

hostname must be set to true for one, and only one, of your network groups.

A load balancer definition has the following attributes:

Key Value Description
name An administrator-defined name for the load balancer.
provider The service component that implements the load balancer. Currently only ip-cluster (ha-proxy) is supported. Future releases will provide support for external load balancers.
roles The list of endpoint roles that this load balancer provides (see below). Valid roles are "public", "internal", and "admin". To ensure separation of concerns, the role "public" cannot be combined with any other role. See Section 5.2.10.1.1, “Load Balancers” for an example of how the role provides endpoint separation.
components (optional) The list of service-components for which the load balancer provides a non-encrypted virtual IP address.
tls-components (optional) The list of service-components for which the load balancer provides TLS-terminated virtual IP addresses. In SUSE OpenStack Cloud, TLS is supported both for internal and public endpoints.
external-name (optional) The name to be registered in keystone for the publicURL. If not specified, the virtual IP address will be registered. Note that this value cannot be changed after the initial deployment.
cert-file (optional) The name of the certificate file to be used for TLS endpoints.

6.13.1 Load Balancer Definitions in Network Groups

In a cloud consisting of a single control-plane, a load-balancer may be fully defined within a network-groups object as shown in the examples above. See Section 6.3, “Load Balancers” for a complete description of load balancer attributes.

Starting in SUSE OpenStack Cloud 9, a load-balancer may be defined within a control-plane object, in which case the network-group provides just a list of load balancer names, as shown below. See Section 6.3, “Load Balancers” for load balancer definitions in control planes.

network-groups:

     - name: EXTERNAL-API
       hostname-suffix: extapi

       load-balancers:
         - lb-cp1
         - lb-cp2

The same load balancer name can be used in multiple control-planes to make the above list simpler.

6.13.2 Network Tags

SUSE OpenStack Cloud supports a small number of network tags which may be used to convey information between the input model and the service components (currently only neutron uses network tags). A network tag consists minimally of a tag name; but some network tags have additional attributes.

Table 6.1: neutron.networks.vxlan
Tag Value Description
neutron.networks.vxlan This tag causes neutron to be configured to use VxLAN as the underlay for tenant networks. The associated network group will carry the VxLAN traffic.
tenant-vxlan-id-range (optional) Used to specify the VxLAN identifier range in the format MIN-ID:MAX-ID. The default range is 1001:65535. Enclose the range in quotation marks. Multiple ranges can be specified as a comma-separated list.

Example using the default ID range:

  tags:
    - neutron.networks.vxlan

Example using a user-defined ID range:

  tags:
    - neutron.networks.vxlan:
        tenant-vxlan-id-range: "1:20000"

Example using multiple user-defined ID ranges:

  tags:
    - neutron.networks.vxlan:
        tenant-vxlan-id-range: "1:2000,3000:4000,5000:6000"
Table 6.2: neutron.networks.vlan
Tag Value Description
neutron.networks.vlan

This tag causes neutron to be configured for provider VLAN networks, and optionally to use VLAN as the underlay for tenant networks. The associated network group will carry the VLAN traffic. This tag can be specified on multiple network groups. However, this tag does not cause any neutron networks to be created; that must be done in neutron after the cloud is deployed.

provider-physical-network The provider network name. This is the name to be used in the neutron API for the provider:physical_network parameter of network objects.
tenant-vlan-id-range (optional) This attribute causes neutron to use VLAN for tenant networks; omit this attribute if you are using provider VLANs only. It specifies the VLAN ID range for tenant networks, in the format MIN-ID:MAX-ID. Enclose the range in quotation marks. Multiple ranges can be specified as a comma-separated list.

Example using a provider vlan only (may be used with tenant VxLAN):

  tags:
    - neutron.networks.vlan:
        provider-physical-network: physnet1

Example using a tenant and provider VLAN:

  tags:
    - neutron.networks.vlan:
        provider-physical-network: physnet1
        tenant-vlan-id-range: "30:50,100:200"
Table 6.3: neutron.networks.flat
Tag Value Description
neutron.networks.flat

This tag causes neutron to be configured for provider flat networks. The associated network group will carry the traffic. This tag can be specified on multiple network groups. However, this tag does not cause any neutron networks to be created; that must be done in neutron after the cloud is deployed.

provider-physical-network The provider network name. This is the name to be used in the neutron API for the provider:physical_network parameter of network objects. When specified on multiple network groups, the name must be unique for each network group.

Example using a provider flat network:

  tags:
    - neutron.networks.flat:
        provider-physical-network: flatnet1
Table 6.4: neutron.l3_agent.external_network_bridge
Tag Value Description
neutron.l3_agent.external_network_bridge

This tag causes the neutron L3 Agent to be configured to use the associated network group as the neutron external network for floating IP addresses. A CIDR should not be defined for the associated physical network, as that will cause addresses from that network to be configured in the hypervisor. When this tag is used, provider networks cannot be used as external networks. However, this tag does not cause a neutron external network to be created; that must be done in neutron after the cloud is deployed.

Example using neutron.l3_agent.external_network_bridge:

  tags:
    - neutron.l3_agent.external_network_bridge

6.13.3 MTU (Maximum Transmission Unit)

A network group may optionally specify an MTU for its networks to use. Because a network-interface in the interface-model may have a mix of one untagged-vlan network group and one or more tagged-vlan network groups, there are some special requirements when specifying an MTU on a network group.

If the network group consists of untagged-vlan network(s) then its specified MTU must be greater than or equal to the MTU of any tagged-vlan network groups which are co-located on the same network-interface.

For example, consider a network group with untagged VLANs, NET-GROUP-1, which is going to share (via a network interface definition) a device (eth0) with two network groups with tagged VLANs: NET-GROUP-2 (ID=201, MTU=1550) and NET-GROUP-3 (ID=301, MTU=9000).

The device (eth0) must have an MTU which is large enough to accommodate the VLAN in NET-GROUP-3. Since NET-GROUP-1 has untagged VLANs, it will also be using this device, so it must also have an MTU of 9000, which results in the following configuration.

    +eth0 (9000)   <------ this MTU comes from NET-GROUP-1
    | |
    | |----+ vlan201@eth0 (1550)
    \------+ vlan301@eth0 (9000)

Where an interface is used only by network groups with tagged VLANs the MTU of the device or bond will be set to the highest MTU value in those groups.

For example, if bond0 is configured to be used by three network groups: NET-GROUP-1 (ID=101, MTU=3000), NET-GROUP-2 (ID=201, MTU=1550), and NET-GROUP-3 (ID=301, MTU=9000), the resulting configuration would be:

    +bond0 (9000)   <------ because of NET-GROUP-3
    | | |
    | | |--+vlan101@bond0 (3000)
    | |----+vlan201@bond0 (1550)
    |------+vlan301@bond0 (9000)
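The MTU values used in the examples above are set with the optional mtu attribute on each network group; a minimal hedged sketch (the group names and hostname suffixes are illustrative, reusing the example groups):

```yaml
   network-groups:
      - name: NET-GROUP-2
        hostname-suffix: ng2
        mtu: 1550    # carried by vlan201

      - name: NET-GROUP-3
        hostname-suffix: ng3
        mtu: 9000    # carried by vlan301; drives the device or bond MTU
```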

6.14 Networks

A network definition represents a physical L3 network used by the cloud infrastructure. Note that these are different from the network definitions that are created/configured in neutron, although some of the networks may be used by neutron.

---
   product:
     version: 2

   networks:
     - name: NET_EXTERNAL_VM
       vlanid: 102
       tagged-vlan: true
       network-group: EXTERNAL_VM

     - name: NET_GUEST
       vlanid: 103
       tagged-vlan: true
       cidr: 10.1.1.0/24
       gateway-ip: 10.1.1.1
       network-group: GUEST

     - name: NET_MGMT
       vlanid: 100
       tagged-vlan: false
       cidr: 10.2.1.0/24
       addresses:
       - 10.2.1.10-10.2.1.20
       - 10.2.1.24
       - 10.2.1.30-10.2.1.36
       gateway-ip: 10.2.1.1
       network-group: MGMT
Key Value Description
name The name of this network. The network name may be used in a server-group definition (see Section 6.6, “Server Groups”) to specify a particular network from within a network-group to be associated with a set of servers.
network-group The name of the associated network group.
vlanid (optional) The IEEE 802.1Q VLAN Identifier, a value in the range 1 through 4094. A vlanid must be specified when tagged-vlan is true.
tagged-vlan (optional) May be set to true or false. If true, packets for this network carry the vlanid in the packet header; such packets are referred to as VLAN-tagged frames in IEEE 802.1Q.
cidr (optional) The IP subnet associated with this network.
addresses (optional)

A list of IP addresses or IP address ranges (specified as START_ADDRESS_RANGE-END_ADDRESS_RANGE) from which server addresses may be allocated. The default value is the first host address within the CIDR (for example, the .1 address).

The addresses parameter provides more flexibility than the start-address and end-address parameters and so is the preferred means of specifying this data.

start-address (optional) (deprecated)

An IP address within the CIDR which will be used as the start of the range of IP addresses from which server addresses may be allocated. The default value is the first host address within the CIDR (for example, the .1 address).

end-address (optional) (deprecated)

An IP address within the CIDR which will be used as the end of the range of IP addresses from which server addresses may be allocated. The default value is the last host address within the CIDR (for example, the .254 address of a /24). This parameter is deprecated in favor of the new addresses parameter. This parameter may be removed in a future release.

gateway-ip (optional) The IP address of the gateway for this network. Gateway addresses must be specified if the associated network-group provides routes.

6.15 Firewall Rules

The configuration processor will automatically generate "allow" firewall rules for each server based on the services deployed and block all other ports. The firewall rules in the input model allow the customer to define additional rules for each network group.

Administrator-defined rules are applied after all rules generated by the Configuration Processor.

---
  product:
     version: 2

  firewall-rules:

     - name: PING
       network-groups:
       - MANAGEMENT
       - GUEST
       - EXTERNAL-API
       rules:
       # open ICMP echo request (ping)
       - type: allow
         remote-ip-prefix:  0.0.0.0/0
         # icmp type
         port-range-min: 8
         # icmp code
         port-range-max: 0
         protocol: icmp
Key Value Description
name An administrator-defined name for the group of rules.
network-groups

A list of network-group names that the rules apply to. A value of "all" matches all network-groups.

rules

A list of rules. Rules are applied in the order in which they appear in the list, apart from the control provided by the "final" option (see above). The order between sets of rules is indeterminate.

6.15.1 Rule

Each rule in the list takes the following parameters (which match the parameters of a neutron security group rule):

Key Value Description
type Must be allow.
remote-ip-prefix Range of remote addresses in CIDR format that this rule applies to.

port-range-min

port-range-max

Defines the range of ports covered by the rule. Note that if the protocol is icmp then port-range-min is the ICMP type and port-range-max is the ICMP code.
protocol Must be one of tcp, udp, or icmp.
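Following the structure of the ICMP example above, a hedged sketch of a TCP rule (the rule name and remote prefix are illustrative, not prescriptive):

```yaml
  firewall-rules:

     - name: SSH
       network-groups:
       - MANAGEMENT
       rules:
       # open TCP port 22 (SSH) to the illustrative management subnet
       - type: allow
         remote-ip-prefix: 192.168.245.0/24
         port-range-min: 22
         port-range-max: 22
         protocol: tcp
```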

6.16 Configuration Data

Configuration data allows values to be passed into the model to be used in the context of a specific control plane or cluster. The content and format of the data is service specific.

---
  product:
    version: 2

  configuration-data:
    - name:  NEUTRON-CONFIG-CP1
      services:
        - neutron
      data:
        neutron_provider_networks:
        - name: OCTAVIA-MGMT-NET
          provider:
            - network_type: vlan
              physical_network: physnet1
              segmentation_id: 106
          cidr: 172.30.1.0/24
          no_gateway:  True
          enable_dhcp: True
          allocation_pools:
            - start: 172.30.1.10
              end: 172.30.1.250
          host_routes:
            # route to MANAGEMENT-NET-1
            - destination: 192.168.245.0/24
              nexthop:  172.30.1.1

        neutron_external_networks:
        - name: ext-net
          cidr: 172.31.0.0/24
          gateway: 172.31.0.1
          provider:
            - network_type: vlan
              physical_network: physnet1
              segmentation_id: 107
          allocation_pools:
            - start: 172.31.0.2
              end: 172.31.0.254

      network-tags:
        - network-group: MANAGEMENT
          tags:
            - neutron.networks.vxlan
            - neutron.networks.vlan:
                provider-physical-network: physnet1
        - network-group: EXTERNAL-VM
          tags:
            - neutron.l3_agent.external_network_bridge
Key Value Description
name An administrator-defined name for the set of configuration data.
services

A list of services that the data applies to. Note that these are service names (for example, neutron, octavia, etc.), not service-component names (neutron-server, octavia-api, etc.).

data A service-specific data structure (see below).
network-tags (optional, neutron-only)

A list of network tags. Tags provide the linkage between the physical network configuration and the neutron network configuration.

Starting in SUSE OpenStack Cloud 9, network tags may be defined as part of a neutron configuration-data object rather than as part of a network-group object.

6.16.1 neutron network-tags

Key Value Description
network-group The name of the network-group with which the tags are associated.
tags A list of network tags. Tags provide the linkage between the physical network configuration and the neutron network configuration. See Section 6.13.2, “Network Tags”.

6.16.2 Neutron Configuration Data

Key Value Description
neutron-provider-networks A list of provider networks that will be created in neutron.
neutron-external-networks A list of external networks that will be created in neutron. These networks will have the “router:external” attribute set to True.

6.16.2.1 neutron-provider-networks

Key Value Description
name

The name for this network in neutron.

This name must be distinct from the names of any Network Groups in the model to enable it to be included in the “routes” value of a network group.

provider

Details of network to be created

  • network_type

  • physical_network

  • segmentation_id

These values are passed as --provider-* options to the openstack network create command.

cidr

The CIDR to use for the network. This is passed to the openstack subnet create command.

shared (optional)

A Boolean value that specifies if the network can be shared.

This value is passed to the openstack network create command.

allocation_pools (optional)

A list of start and end address pairs that limit the set of IP addresses that can be allocated for this network.

These values are passed to the openstack subnet create command.

host_routes (optional)

A list of routes to be defined for the network. Each route consists of a destination in cidr format and a nexthop address.

These values are passed to the openstack subnet create command.

gateway_ip (optional)

A gateway address for the network.

This value is passed to the openstack subnet create command.

no_gateway (optional)

A Boolean value indicating that the gateway should not be distributed on this network.

This is translated into the no-gateway option to the openstack subnet create command.

enable_dhcp (optional)

A Boolean value indicating whether DHCP should be enabled. If not specified, DHCP is not enabled.

This value is passed to the openstack subnet create command.
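As a sketch, a provider network that exercises the optional keys above might look like the following; the network name, VLAN ID, and all addresses are illustrative.

```yaml
neutron_provider_networks:
  - name: prov-net
    provider:
      - network_type: vlan
        physical_network: physnet1
        segmentation_id: 106
    cidr: 172.30.1.0/24
    shared: false
    enable_dhcp: true
    allocation_pools:
      - start: 172.30.1.10
        end: 172.30.1.250
    host_routes:
      - destination: 10.1.1.0/24
        nexthop: 172.30.1.1
    gateway_ip: 172.30.1.1
```

On deployment, the provider attributes are passed to openstack network create, while cidr, allocation_pools, host_routes, gateway_ip, and enable_dhcp are passed to openstack subnet create.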

6.16.2.2 neutron-external-networks

name

The name for this network in neutron.

This name must be distinct from the names of any Network Groups in the model to enable it to be included in the “routes” value of a network group.

provider (optional)

The provider attributes are specified when using neutron provider networks as external networks. Provider attributes should not be specified when the external network is configured with the neutron.l3_agent.external_network_bridge tag.

Standard provider network attributes may be specified:

  • network_type

  • physical_network

  • segmentation_id

These values are passed as --provider-* options to the openstack network create command.

cidr

The CIDR to use for the network. This is passed to the openstack subnet create command.

allocation_pools (optional)

A list of start and end address pairs that limit the set of IP addresses that can be allocated for this network.

These values are passed to the openstack subnet create command.

gateway (optional)

A gateway address for the network.

This value is passed to the openstack subnet create command.
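When the external network is instead served through the neutron.l3_agent.external_network_bridge tag (as in the network-tags example above), the provider attributes are omitted. A minimal sketch, with illustrative addresses:

```yaml
neutron_external_networks:
  - name: ext-net
    cidr: 172.31.0.0/24
    gateway: 172.31.0.1
    allocation_pools:
      - start: 172.31.0.2
        end: 172.31.0.254
```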

6.16.3 Octavia Configuration Data

---
  product:
    version: 2

  configuration-data:
    - name: OCTAVIA-CONFIG-CP1
      services:
        - octavia
      data:
        amp_network_name: OCTAVIA-MGMT-NET
amp_network_name

The name of the neutron provider network that Octavia will use for management access to load balancers.

6.16.4 Ironic Configuration Data

---
  product:
    version: 2

  configuration-data:
    - name:  IRONIC-CONFIG-CP1
      services:
        - ironic
      data:
        cleaning_network: guest-network
        enable_node_cleaning: true
        enable_oneview: false

        oneview_manager_url:
        oneview_username:
        oneview_encrypted_password:
        oneview_allow_insecure_connections:
        tls_cacert_file:
        enable_agent_drivers: true

Refer to the documentation on configuring ironic for details of the above attributes.

6.16.5 Swift Configuration Data

---
  product:
    version: 2

  configuration-data:
  - name: SWIFT-CONFIG-CP1
    services:
      - swift
    data:
      control_plane_rings:
        swift-zones:
          - id: 1
            server-groups:
              - AZ1
          - id: 2
            server-groups:
              - AZ2
          - id: 3
            server-groups:
              - AZ3
        rings:
          - name: account
            display-name: Account Ring
            min-part-hours: 16
            partition-power: 12
            replication-policy:
              replica-count: 3

          - name: container
            display-name: Container Ring
            min-part-hours: 16
            partition-power: 12
            replication-policy:
              replica-count: 3

          - name: object-0
            display-name: General
            default: yes
            min-part-hours: 16
            partition-power: 12
            replication-policy:
              replica-count: 3

Refer to the documentation on Section 11.10, “Understanding Swift Ring Specifications” for details of the above attributes.

6.17 Pass Through

Pass-through definitions assign configuration values either globally, at the cloud level, or to specific servers, making those values available to the services that consume them.

product:
  version: 2

pass-through:
  global:
    esx_cloud: true
  servers:
    - id: 7d8c415b541ca9ecf9608b35b32261e6c0bf275a
      data:
        vmware:
          cert_check: false
          vcenter_cluster: Cluster1
          vcenter_id: BC9DED4E-1639-481D-B190-2B54A2BF5674
          vcenter_ip: 10.1.200.41
          vcenter_port: 443
          vcenter_username: administrator@vsphere.local
global

These values will be used at the cloud level.

servers

These values will be assigned to specific servers, identified by the server id.