Applies to SUSE OpenStack Cloud 8

5 Input Model

5.1 Introduction to the Input Model

This document describes how SUSE OpenStack Cloud input models can be used to define and configure the cloud.

SUSE OpenStack Cloud ships with a set of example input models that can be used as starting points for defining a custom cloud. An input model allows you, the cloud administrator, to describe the cloud configuration in terms of:

  • Which OpenStack services run on which server nodes

  • How individual servers are configured in terms of disk and network adapters

  • The overall network configuration of the cloud

  • Network traffic separation

  • CIDR and VLAN assignments

The input model is consumed by the configuration processor which parses and validates the input model and outputs the effective configuration that will be deployed to each server that makes up your cloud.

The document is structured as follows:

  • Concepts - This explains the ideas behind the declarative model approach used in SUSE OpenStack Cloud 8 and the core concepts used in describing that model

  • Input Model - This section provides a description of each of the configuration entities in the input model

  • Core Examples - In this section we provide samples and definitions of some of the more important configuration entities

5.2 Concepts

A SUSE OpenStack Cloud 8 cloud is defined by a declarative model that is described in a series of configuration objects. These configuration objects are represented in YAML files which together constitute the various example configurations provided as templates with this release. These examples can be used nearly unchanged, with the exception of necessary changes to IP addresses and other site- and hardware-specific identifiers. Alternatively, the examples may be customized to meet site requirements.

The following diagram shows the set of configuration objects and their relationships. All objects have a name that you may set to be something meaningful for your context. In the examples these names are provided in capital letters as a convention. These names have no significance to SUSE OpenStack Cloud, rather it is the relationships between them that define the configuration.

The configuration processor reads and validates the input model described in the YAML files discussed above, combines it with the service definitions provided by SUSE OpenStack Cloud and any persisted state information about the current deployment to produce a set of Ansible variables that can be used to deploy the cloud. It also produces a set of information files that provide details about the configuration.

The relationship between the file systems on the SUSE OpenStack Cloud deployment server and the configuration processor is shown in the following diagram. Below the line are the directories that you, the cloud administrator, edit to declare the cloud configuration. Above the line are the directories that are internal to the Cloud Lifecycle Manager such as Ansible playbooks and variables.

The input model is read from the ~/openstack/my_cloud/definition directory. Although the supplied examples use separate files for each type of object in the model, the names and layout of the files have no significance to the configuration processor; it simply reads all of the .yml files in this directory. Cloud administrators are therefore free to use whatever structure is best for their context. For example, you may decide to maintain separate files or sub-directories for each physical rack of servers.

As mentioned, the examples use the conventional upper casing for object names, but these strings are used only to define the relationship between objects. They have no specific significance to the configuration processor.

5.2.1 Cloud

The Cloud definition includes a few top-level configuration values such as the name of the cloud, the host prefix, details of external services (NTP, DNS, SMTP) and the firewall settings.

The location of the cloud configuration file also tells the configuration processor where to look for the files that define all of the other objects in the input model.
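As a sketch, a minimal cloud definition might look like the following; the name, prefixes, and server addresses are illustrative assumptions, not values required by SUSE OpenStack Cloud:

```yaml
---
product:
  version: 2
cloud:
  name: example-cloud          # illustrative cloud name
  hostname-data:
    host-prefix: ardana        # prefix used when generating host names
    member-prefix: -m
  ntp-servers:
    - ntp.example.com          # hypothetical external NTP server
  dns-settings:
    nameservers:
      - 10.0.0.2               # hypothetical site DNS server
```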

5.2.2 Control Planes

A control-plane runs one or more services distributed across clusters and resource groups.

A control-plane uses servers with a particular server-role.

A control-plane provides the operating environment for a set of services; normally consisting of a set of shared services (MariaDB, RabbitMQ, HA Proxy, Apache, etc.), OpenStack control services (API, schedulers, etc.) and the resources they are managing (compute, storage, etc.).

A simple cloud may have a single control-plane which runs all of the services. A more complex cloud may have multiple control-planes to allow for more than one instance of some services. Services that need to consume (use) another service (such as Neutron consuming MariaDB, Nova consuming Neutron) always use the service within the same control-plane. In addition a control-plane can describe which services can be consumed from other control-planes. It is one of the functions of the configuration processor to resolve these relationships and make sure that each consumer/service is provided with the configuration details to connect to the appropriate provider/service.

Each control-plane is structured as clusters and resources. The clusters are typically used to host the OpenStack services that manage the cloud such as API servers, database servers, Neutron agents, and Swift proxies, while the resources are used to host the scale-out OpenStack services such as Nova-Compute or Swift-Object services. This is a representation convenience rather than a strict rule, for example it is possible to run the Swift-Object service in the management cluster in a smaller-scale cloud that is not designed for scale-out object serving.

A cluster can contain one or more servers and you can have one or more clusters depending on the capacity and scalability needs of the cloud that you are building. Spreading services across multiple clusters provides greater scalability, but it requires a greater number of physical servers. A common pattern for a large cloud is to run high data volume services such as monitoring and logging in a separate cluster. A cloud with a high object storage requirement will typically also run the Swift service in its own cluster.

Clusters in this context are a mechanism for grouping service components on physical servers, but all instances of a component in a control-plane work collectively. For example, if HA Proxy is configured to run on multiple clusters within the same control-plane then all of those instances will work as a single instance of the ha-proxy service.

Both clusters and resources define the type (via a list of server-roles) and number of servers (min and max or count) they require.

The control-plane can also define a list of failure-zones (server-groups) from which to allocate servers.
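As a sketch (the names, counts, and service-component lists here are illustrative, not required values), a control-plane with one cluster and one scale-out resource group might be declared like this:

```yaml
control-planes:
  - name: control-plane-1
    control-plane-prefix: cp1
    failure-zones:                     # server-groups to allocate servers from
      - AZ1
      - AZ2
    clusters:
      - name: cluster1
        cluster-prefix: c1
        server-role: CONTROLLER-ROLE   # type of server required
        member-count: 3                # exact number of servers
        service-components:
          - mysql
          - keystone-api
    resources:
      - name: compute
        resource-prefix: comp
        server-role: COMPUTE-ROLE
        min-count: 1                   # scale-out resources set a minimum only
        service-components:
          - nova-compute
```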

5.2.2.1 Control Planes and Regions

A region in OpenStack terms is a collection of URLs that together provide a consistent set of services (Nova, Neutron, Swift, etc). Regions are represented in the Keystone identity service catalog. In SUSE OpenStack Cloud, multiple regions are not supported. Only Region0 is valid.

In a simple single control-plane cloud, there is no need for a separate region definition and the control-plane itself can define the region name.

5.2.3 Services

A control-plane runs one or more services.

A service is the collection of service-components that provide a particular feature; for example, Nova provides the compute service and consists of the following service-components: nova-api, nova-scheduler, nova-conductor, nova-novncproxy, and nova-compute. Some services, like the authentication/identity service Keystone, only consist of a single service-component.

To define your cloud, all you need to know about a service are the names of the service-components. The details of the services themselves and how they interact with each other is captured in service definition files provided by SUSE OpenStack Cloud.

When specifying your cloud, you have to decide where components will run and how they connect to the networks. For example, should they all run in one control-plane sharing common services or be distributed across multiple control-planes to provide separate instances of some services? The SUSE OpenStack Cloud supplied examples provide solutions for some typical configurations.

Where services run is defined in the control-plane. How they connect to networks is defined in the network-groups.

5.2.4 Server Roles

Clusters and resources use servers with a particular set of server-roles.

You are going to be running the services on physical servers, and you are going to need a way to specify which type of servers you want to use where. This is defined via the server-role. Each server-role describes how to configure the physical aspects of a server to fulfill the needs of a particular role. You will generally use a different role whenever the servers are physically different (have different disks or network interfaces) or if you want to use some specific servers in a particular role (for example to choose which of a set of identical servers are to be used in the control plane).

Each server-role has a relationship to four other entities:

  • The disk-model specifies how to configure and use a server's local storage and it specifies disk sizing information for virtual machine servers. The disk model is described in the next section.

  • The interface-model describes how a server's network interfaces are to be configured and used. This is covered in more detail in the networking section.

  • An optional memory-model specifies how to configure and use huge pages. The memory-model specifies memory sizing information for virtual machine servers.

  • An optional cpu-model specifies how the CPUs will be used by Nova and by DPDK. The cpu-model specifies CPU sizing information for virtual machine servers.
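The four relationships above are expressed as named references in the server-role definition. The following sketch uses illustrative model names (any consistent names work, since only the relationships matter to the configuration processor):

```yaml
server-roles:
  - name: CONTROLLER-ROLE
    interface-model: CONTROLLER-INTERFACES
    disk-model: CONTROLLER-DISKS
  - name: COMPUTE-ROLE
    interface-model: COMPUTE-INTERFACES
    disk-model: COMPUTE-DISKS
    memory-model: COMPUTE-MEMORY   # optional
    cpu-model: COMPUTE-CPU         # optional
```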

5.2.5 Disk Model

Each physical disk device is associated with a device-group or a volume-group.

Device-groups are consumed by services.

Volume-groups are divided into logical-volumes.

Logical-volumes are mounted as file systems or consumed by services.

Disk-models define how local storage is to be configured and presented to services. Disk-models are identified by a name, which you will specify. The SUSE OpenStack Cloud examples provide some typical configurations. As this is an area that varies with respect to the services that are hosted on a server and the number of disks available, it is impossible to cover all possible permutations; you may need to express your configuration via modifications to the examples.

Within a disk-model, disk devices are assigned to either a device-group or a volume-group.

A device-group is a set of one or more disks that are to be consumed directly by a service. For example, a set of disks to be used by Swift. The device-group identifies the list of disk devices, the service, and a few service-specific attributes that tell the service about the intended use (for example, in the case of Swift this is the ring names). When a device is assigned to a device-group, the associated service is responsible for the management of the disks. This management includes the creation and mounting of file systems. (Swift can provide additional data integrity when it has full control over the file systems and mount points.)

A volume-group is used to present disk devices in a LVM volume group. It also contains details of the logical volumes to be created including the file system type and mount point. Logical volume sizes are expressed as a percentage of the total capacity of the volume group. A logical-volume can also be consumed by a service in the same way as a device-group. This allows services to manage their own devices on configurations that have limited numbers of disk drives.

Disk models also provide disk sizing information for virtual machine servers.
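The following sketch shows both concepts together: a volume-group carved into logical volumes sized as percentages, and a device-group handing a disk directly to Swift. Device names, sizes, and the ring name are illustrative assumptions:

```yaml
disk-models:
  - name: CONTROLLER-DISKS
    volume-groups:
      - name: ardana-vg
        physical-volumes:
          - /dev/sda_root
        logical-volumes:
          - name: root
            size: 40%            # percentage of the volume group
            fstype: ext4
            mount: /
          - name: log
            size: 50%
            fstype: ext4
            mount: /var/log
    device-groups:
      - name: swiftobj           # disks consumed directly by Swift
        devices:
          - name: /dev/sdb
        consumer:
          name: swift
          attrs:
            rings:
              - object-0         # service-specific attribute: ring name
```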

5.2.6 Memory Model

Memory models define how the memory of a server should be configured to meet the needs of a particular role. They allow a number of huge pages to be defined at both the server and numa-node level.

Memory models also provide memory sizing information for virtual machine servers.

Memory models are optional - it is valid to have a server role without a memory model.
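A memory-model sketch might look as follows; the page sizes and counts are illustrative assumptions:

```yaml
memory-models:
  - name: COMPUTE-MEMORY
    default-huge-page-size: 2M
    huge-pages:
      - size: 2M
        count: 1024              # server-level allocation
      - size: 1G
        count: 2
        numa-node: 0             # numa-node-level allocation
```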

5.2.7 CPU Model

CPU models define how CPUs of a server will be used. The model allows CPUs to be assigned for use by components such as Nova (for VMs) and Open vSwitch (for DPDK). It also allows those CPUs to be isolated from the general kernel SMP balancing and scheduling algorithms.

CPU models also provide CPU sizing information for virtual machine servers.

CPU models are optional - it is valid to have a server role without a cpu model.
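A cpu-model sketch might assign CPUs like this; the component names, processor IDs, and attribute names are illustrative assumptions based on the concepts above, not a definitive schema:

```yaml
cpu-models:
  - name: COMPUTE-CPU
    assignments:
      - components:
          - nova-compute-kvm     # CPUs assigned for use by VMs
        cpu:
          - processor-ids: 4-15
            role: vm
      - components:
          - openvswitch          # CPUs assigned for DPDK
        cpu:
          - processor-ids: 2,3
            role: eal
```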

5.2.8 Servers

Servers have a server-role which determines how they will be used in the cloud.

Servers (in the input model) enumerate the resources available for your cloud. In addition, in this definition file you can either provide SUSE OpenStack Cloud with all of the details it needs to PXE boot and install an operating system onto the server, or, if you prefer to use your own operating system installation tooling you can simply provide the details needed to be able to SSH into the servers and start the deployment.

The address specified for the server will be the one used by SUSE OpenStack Cloud for lifecycle management and must be part of a network which is in the input model. If you are using SUSE OpenStack Cloud to install the operating system this network must be an untagged VLAN. The first server must be installed manually from the SUSE OpenStack Cloud ISO and this server must be included in the input model as well.

In addition to the network details used to install or connect to the server, each server defines what its server-role is and to which server-group it belongs.
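A server entry might be sketched as follows; all addresses, credentials, and the mapping name are illustrative assumptions:

```yaml
servers:
  - id: controller1
    ip-addr: 192.168.10.3        # address used for lifecycle management
    role: CONTROLLER-ROLE
    server-group: RACK1
    nic-mapping: DL360-4PORT
    # The details below are only needed if SUSE OpenStack Cloud is to
    # PXE boot the server and install the operating system:
    mac-addr: 8c:dc:d4:b5:ca:c0
    ilo-ip: 192.168.9.3
    ilo-user: admin
    ilo-password: secret
```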

5.2.9 Server Groups

A server is associated with a server-group.

A control-plane can use server-groups as failure zones for server allocation.

A server-group may be associated with a list of networks.

A server-group can contain other server-groups.

The practice of locating physical servers in a number of racks or enclosures in a data center is common. Such racks generally provide a degree of physical isolation that allows for separate power and/or network connectivity.

In the SUSE OpenStack Cloud model we support this configuration by allowing you to define a hierarchy of server-groups. Each server is associated with one server-group, normally at the bottom of the hierarchy.

Server-groups are an optional part of the input model - if you do not define any, then all servers and networks will be allocated as if they are part of the same server-group.

5.2.9.1 Server Groups and Failure Zones

A control-plane defines a list of server-groups as the failure zones from which it wants to use servers. All servers in a server-group listed as a failure zone in the control-plane and any server-groups they contain are considered part of that failure zone for allocation purposes. The following example shows how three levels of server-groups can be used to model a failure zone consisting of multiple racks, each of which in turn contains a number of servers.

When allocating servers, the configuration processor will traverse down the hierarchy of server-groups listed as failure zones until it can find an available server with the required server-role. If the allocation policy is defined to be strict, it will allocate servers equally across each of the failure zones. A cluster or resource-group can also independently specify the failure zones it wants to use if needed.
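A server-group hierarchy of this kind might be sketched as follows (the group and network names are illustrative): a failure zone contains racks, and each rack lists its per-rack network.

```yaml
server-groups:
  - name: AZ1                    # listed as a failure zone by the control-plane
    server-groups:
      - RACK1
      - RACK2
  - name: RACK1
    networks:
      - MANAGEMENT-NET-RACK1     # per-rack network
  - name: RACK2
    networks:
      - MANAGEMENT-NET-RACK2
```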

5.2.9.2 Server Groups and Networks

Each L3 network in a cloud must be associated with all or some of the servers, typically following a physical pattern (such as having separate networks for each rack or set of racks). This is also represented in the SUSE OpenStack Cloud model via server-groups: each group lists zero or more networks to which servers associated with server-groups at or below this point in the hierarchy are connected.

When the configuration processor needs to resolve the specific network a server should be configured to use, it traverses up the hierarchy of server-groups, starting with the group the server is directly associated with, until it finds a server-group that lists a network in the required network group.

The level in the server-group hierarchy at which a network is associated will depend on the span of connectivity it must provide. In the above example there might be networks in some network-groups which are per rack (that is Rack 1 and Rack 2 list different networks from the same network-group) and networks in a different network-group that span failure zones (the network used to provide floating IP addresses to virtual machines for example).

5.2.10 Networking

In addition to the mapping of services to specific clusters and resources we must also be able to define how the services connect to one or more networks.

In a simple cloud there may be a single L3 network but more typically there are functional and physical layers of network separation that need to be expressed.

Functional network separation provides different networks for different types of traffic; for example, it is common practice in even small clouds to separate the External APIs that users will use to access the cloud and the external IP addresses that users will use to access their virtual machines. In more complex clouds it is common to also separate out virtual networking between virtual machines, block storage traffic, and volume traffic onto their own sets of networks. In the input model, this level of separation is represented by network-groups.

Physical separation is required when there are separate L3 network segments providing the same type of traffic; for example, where each rack uses a different subnet. This level of separation is represented in the input model by the networks within each network-group.

5.2.10.1 Network Groups

Service endpoints attach to networks in a specific network-group.

Network-groups can define routes to other networks.

Network-groups encapsulate the configuration for services via network-tags.

A network-group defines the traffic separation model and all of the properties that are common to the set of L3 networks that carry each type of traffic. They define where services are attached to the network model and the routing within that model.

In terms of service connectivity, all that has to be captured in the network-groups definition are the same service-component names that are used when defining control-planes. SUSE OpenStack Cloud also allows a default attachment to be used to specify "all service-components" that are not explicitly connected to another network-group. So, for example, to isolate Swift traffic, the swift-account, swift-container, and swift-object service components are attached to an "Object" network-group and all other services are connected to the "MANAGEMENT" network-group via the default relationship.

Note

The name of the "MANAGEMENT" network-group cannot be changed. It must be upper case. Every SUSE OpenStack Cloud requires this network group in order to be valid.

The details of how each service connects, such as what port it uses, if it should be behind a load balancer, if and how it should be registered in Keystone, and so forth, are defined in the service definition files provided by SUSE OpenStack Cloud.

In any configuration with multiple networks, controlling the routing is a major consideration. In SUSE OpenStack Cloud, routing is controlled at the network-group level. First, all networks are configured to provide the route to any other networks in the same network-group. In addition, a network-group may be configured to provide the route to the networks in another network-group; for example, if the internal APIs are in a dedicated network-group (a common configuration in a complex network because a network group with load balancers cannot be segmented), then other network-groups may need to include a route to the internal API network-group so that services can access the internal API endpoints. Routes may also be required to define how to access an external storage network or to define a general default route.
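The two ideas above, default service attachment and inter-group routing, might be sketched like this (the group names and suffixes are illustrative):

```yaml
network-groups:
  - name: MANAGEMENT
    hostname-suffix: mgmt
    component-endpoints:
      - default                  # all components not attached elsewhere
    routes:
      - INTERNAL-API             # route to another network-group
  - name: OBJECT                 # isolates Swift traffic
    hostname-suffix: obj
    component-endpoints:
      - swift-account
      - swift-container
      - swift-object
```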

As part of the SUSE OpenStack Cloud deployment, networks are configured to act as the default route for all traffic that was received via that network (so that response packets always return via the network the request came from).

Note that SUSE OpenStack Cloud will configure the routing rules on the servers it deploys and will validate that the routes between services exist in the model, but ensuring that gateways can provide the required routes is the responsibility of your network configuration. The configuration processor provides information about the routes it is expecting to be configured.

For a detailed description of how the configuration processor validates routes, refer to Section 7.6, “Network Route Validation”.

5.2.10.1.1 Load Balancers

Load-balancers provide a specific type of routing and are defined as a relationship between the virtual IP address (VIP) on a network in one network group and a set of service endpoints (which may be on networks in the same or a different network-group).

As each load-balancer is defined providing a virtual IP on a network-group, it follows that those network-groups can each only have one network associated with them.

The load-balancer definition includes a list of service-components and endpoint roles it will provide a virtual IP for. This model allows service-specific load-balancers to be defined on different network-groups. A "default" value is used to express "all service-components" which require a virtual IP address and are not explicitly configured in another load-balancer configuration. The details of how the load-balancer should be configured for each service, such as which ports to use, how to check for service liveness, etc., are provided in the SUSE OpenStack Cloud supplied service definition files.

Where there are multiple instances of a service (for example, in a cloud with multiple control-planes), each control-plane needs its own set of virtual IP addresses and different values for some properties such as the external name and security certificate. To accommodate this in SUSE OpenStack Cloud 8, load-balancers are defined as part of the control-plane, with the network groups defining just which load-balancers are attached to them.

Load balancers are always implemented by an ha-proxy service in the same control-plane as the services.
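A pair of load-balancer definitions within a control-plane, separating public from internal and admin access, might be sketched as follows; the names, external name, and certificate name are illustrative assumptions:

```yaml
load-balancers:
  - name: extlb
    provider: ip-cluster         # implemented by the ha-proxy service
    external-name: cloud.example.com   # hypothetical public name
    roles:
      - public
    tls-components:
      - default                  # all components not listed elsewhere
    cert-file: example-public-cert
  - name: lb
    provider: ip-cluster
    roles:
      - internal
      - admin
    components:
      - default
```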

5.2.10.1.2 Separation of Public, Admin, and Internal Endpoints

The list of endpoint roles for a load-balancer make it possible to configure separate load-balancers for public and internal access to services, and the configuration processor uses this information to both ensure the correct registrations in Keystone and to make sure the internal traffic is routed to the correct endpoint. SUSE OpenStack Cloud services are configured to only connect to other services via internal virtual IP addresses and endpoints, allowing the name and security certificate of public endpoints to be controlled by the customer and set to values that may not be resolvable/accessible from the servers making up the cloud.

Note that each load-balancer defined in the input model will be allocated a separate virtual IP address even when the load-balancers are part of the same network-group. Because of the need to be able to separate both public and internal access, SUSE OpenStack Cloud will not allow a single load-balancer to provide both public and internal access. Load-balancers in this context are logical entities (sets of rules to transfer traffic from a virtual IP address to one or more endpoints).

The following diagram shows a possible configuration in which the hostname associated with the public URL has been configured to resolve to a firewall controlling external access to the cloud. Within the cloud, SUSE OpenStack Cloud services are configured to use the internal URL to access a separate virtual IP address.

5.2.10.1.3 Network Tags

Network tags are defined by some SUSE OpenStack Cloud service-components and are used to convey information between the network model and the service, allowing the dependent aspects of the service to be automatically configured.

Network tags also convey requirements a service may have for aspects of the server network configuration, for example, that a bridge is required on the corresponding network device on a server where that service-component is installed.

See Section 6.13.2, “Network Tags” for more information on specific tags and their usage.

5.2.10.2 Networks

A network is part of a network-group.

Networks are fairly simple definitions. Each network defines the details of its VLAN, optional address details (CIDR, start and end address, gateway address), and which network-group it is a member of.
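A network definition might be sketched as follows; the VLAN ID and address details are illustrative assumptions:

```yaml
networks:
  - name: MANAGEMENT-NET-RACK1
    network-group: MANAGEMENT
    vlanid: 100
    tagged-vlan: false           # untagged, as required for OS installation
    cidr: 192.168.10.0/24
    start-address: 192.168.10.10
    end-address: 192.168.10.250
    gateway-ip: 192.168.10.1
```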

5.2.10.3 Interface Model

A server-role identifies an interface-model that describes how its network interfaces are to be configured and used.

Network groups are mapped onto specific network interfaces via an interface-model, which describes the network devices that need to be created (bonds, ovs-bridges, etc.) and their properties.

An interface-model acts like a template; it can define how some or all of the network-groups are to be mapped for a particular combination of physical NICs. However, it is the service-components on each server that determine which network-groups are required and hence which interfaces and networks will be configured. This means that interface-models can be shared between different server-roles. For example, an API role and a database role may share an interface model even though they may have different disk models and they will require a different subset of the network-groups.

Within an interface-model, physical ports are identified by a device name, which in turn is resolved to a physical port on a server basis via a nic-mapping. To allow different physical servers to share an interface-model, the nic-mapping is defined as a property of each server.

The interface-model can also be used to describe how network devices are to be configured for use with DPDK, SR-IOV, and PCI Passthrough.
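An interface-model mapping two network-groups onto a bonded pair of ports might be sketched as follows; the bond options and logical port names (resolved per server by a nic-mapping) are illustrative assumptions:

```yaml
interface-models:
  - name: CONTROLLER-INTERFACES
    network-interfaces:
      - name: BOND0
        device:
          name: bond0
        bond-data:
          provider: linux
          options:
            mode: active-backup
            miimon: 200
          devices:
            - name: hed1         # logical names resolved by the nic-mapping
            - name: hed2
        network-groups:
          - MANAGEMENT
          - EXTERNAL-API
```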

5.2.10.4 NIC Mapping

When a server has more than a single physical network port, a nic-mapping is required to unambiguously identify each port. Standard Linux mapping of ports to interface names at the time of initial discovery (for example, eth0, eth1, eth2, ...) is not uniformly consistent from server to server, so a mapping of PCI bus address to interface name is used instead.

NIC mappings are also used to specify the device type for interfaces that are to be used for SR-IOV or PCI Passthrough. Each SUSE OpenStack Cloud release includes the data for the supported device types.
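A nic-mapping resolving logical port names to PCI bus addresses might be sketched as follows; the mapping name and bus addresses are illustrative assumptions:

```yaml
nic-mappings:
  - name: DL360-4PORT
    physical-ports:
      - logical-name: hed1
        type: simple-port
        bus-address: "0000:07:00.0"   # PCI bus address of the port
      - logical-name: hed2
        type: simple-port
        bus-address: "0000:07:00.1"
```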

5.2.10.5 Firewall Configuration

The configuration processor uses the details it has about which networks and ports service-components use to create a set of firewall rules for each server. The model allows additional user-defined rules on a per network-group basis.
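A user-defined rule added on a per network-group basis, here allowing ICMP echo on the management networks, might be sketched like this (the rule name and fields are illustrative assumptions):

```yaml
firewall-rules:
  - name: PING
    network-groups:
      - MANAGEMENT
    rules:
      - type: allow
        protocol: icmp
        remote-ip-prefix: 0.0.0.0/0
```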

5.2.11 Configuration Data

Configuration Data is used to provide settings which have to be applied in a specific context, or where the data needs to be verified against or merged with other values in the input model.

For example, when defining a Neutron provider network to be used by Octavia, the network needs to be included in the routing configuration generated by the Configuration Processor.
