7 Other Topics #
7.1 Services and Service Components #
Type | Service | Service Components |
---|---|---|
Compute | | |
Virtual Machine Provisioning | nova | nova-api nova-compute nova-compute-hyperv nova-compute-ironic nova-compute-kvm nova-conductor nova-console-auth nova-esx-compute-proxy nova-metadata nova-novncproxy nova-scheduler nova-scheduler-ironic nova-placement-api |
Bare Metal Provisioning | ironic | ironic-api ironic-conductor |
Networking | | |
Networking | neutron | infoblox-ipam-agent neutron-dhcp-agent neutron-l2gateway-agent neutron-l3-agent neutron-metadata-agent neutron-ml2-plugin neutron-openvswitch-agent neutron-ovsvapp-agent neutron-server neutron-sriov-nic-agent neutron-vpn-agent |
Network Load Balancer | octavia | octavia-api octavia-health-manager |
Domain Name Service (DNS) | designate | designate-api designate-central designate-mdns designate-mdns-external designate-pool-manager designate-zone-manager |
Storage | | |
Block Storage | cinder | cinder-api cinder-backup cinder-scheduler cinder-volume |
Object Storage | swift | swift-account swift-common swift-container swift-object swift-proxy swift-ring-builder swift-rsync |
Image | | |
Image Management | glance | glance-api glance-registry |
Security | | |
Key Management | barbican | barbican-api barbican-worker |
Identity and Authentication | keystone | keystone-api |
Orchestration | | |
Orchestration | heat | heat-api heat-api-cfn heat-api-cloudwatch heat-engine |
Operations | | |
Telemetry | ceilometer | ceilometer-agent-notification ceilometer-common ceilometer-polling |
Cloud Lifecycle Manager | ardana | ardana-ux-services lifecycle-manager lifecycle-manager-target |
Dashboard | horizon | horizon |
Centralized Logging | logging | logging-api logging-producer logging-rotate logging-server |
Monitoring | monasca | monasca-agent monasca-api monasca-dashboard monasca-liveness-check monasca-notifier monasca-persister monasca-threshold monasca-transform |
Operations Console | operations | ops-console-web |
OpenStack Functional Test Suite | tempest | tempest |
Foundation | | |
OpenStack Clients | clients | barbican-client cinder-client designate-client glance-client heat-client ironic-client keystone-client monasca-client neutron-client nova-client openstack-client swift-client |
Supporting Services | foundation | apache2 bind bind-ext influxdb ip-cluster kafka memcached mysql ntp-client ntp-server openvswitch rabbitmq spark storm cassandra zookeeper |
7.2 Name Generation #
Names are generated by the configuration processor for all allocated IP addresses. A server connected to multiple networks will have multiple names associated with it. One of these may be assigned as the hostname for a server via the network-group configuration (see Section 6.12, “NIC Mappings”). Names are generated from data taken from various parts of the input model as described in the following sections.
Clusters#
Names generated for servers in a cluster have the following form:
CLOUD-CONTROL-PLANE-CLUSTER-MEMBER-PREFIXMEMBER_ID-NETWORK
Example: ardana-cp1-core-m1-mgmt
Name | Description |
---|---|
CLOUD | comes from the hostname-data section of the cloud object (see Section 6.1, “Cloud Configuration”) |
CONTROL-PLANE | is the control-plane prefix or name (see Section 6.2, “Control Plane”) |
CLUSTER | is the cluster name (see Section 6.2.1, “Clusters”) |
MEMBER-PREFIX | comes from the hostname-data section of the cloud object (see Section 6.1, “Cloud Configuration”) |
MEMBER_ID | is the ordinal within the cluster, generated by the configuration processor as servers are allocated to the cluster |
NETWORK | comes from the hostname-suffix of the network group to which the network belongs (see Section 6.12, “NIC Mappings”) |
Resource Nodes#
Names generated for servers in a resource group have the following form:
CLOUD-CONTROL-PLANE-RESOURCE-PREFIXMEMBER_ID-NETWORK
Example: ardana-cp1-comp0001-mgmt
Name | Description |
---|---|
CLOUD | comes from the hostname-data section of the cloud object (see Section 6.1, “Cloud Configuration”). |
CONTROL-PLANE | is the control-plane prefix or name (see Section 6.2, “Control Plane”). |
RESOURCE-PREFIX | is the resource prefix or name value (see Section 6.2.2, “Resources”). |
MEMBER_ID | is the ordinal within the resource group, generated by the configuration processor as servers are allocated to the group, padded with leading zeroes to four digits. |
NETWORK | comes from the hostname-suffix of the network group to which the network belongs (see Section 6.12, “NIC Mappings”). |
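The way these fields compose into a hostname can be shown with a short sketch. It is illustrative only and is not part of the configuration processor; the function names are invented here, and the field values are taken from the examples above.

```python
# Illustrative only: recombines the name fields described in the tables above.

def cluster_node_name(cloud, control_plane, cluster, member_prefix,
                      member_id, network_suffix):
    """CLOUD-CONTROL-PLANE-CLUSTER-MEMBER-PREFIXMEMBER_ID-NETWORK"""
    return f"{cloud}-{control_plane}-{cluster}-{member_prefix}{member_id}-{network_suffix}"

def resource_node_name(cloud, control_plane, resource_prefix,
                       member_id, network_suffix):
    """CLOUD-CONTROL-PLANE-RESOURCE-PREFIXMEMBER_ID-NETWORK (member id padded to four digits)"""
    return f"{cloud}-{control_plane}-{resource_prefix}{member_id:04d}-{network_suffix}"

print(cluster_node_name("ardana", "cp1", "core", "m", 1, "mgmt"))
# -> ardana-cp1-core-m1-mgmt
print(resource_node_name("ardana", "cp1", "comp", 1, "mgmt"))
# -> ardana-cp1-comp0001-mgmt
```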
7.3 Persisted Data #
The configuration processor makes allocation decisions on servers and IP addresses which it needs to remember between successive runs so that if new servers are added to the input model they do not disrupt the previously deployed allocations.
To allow users to make multiple iterations of the input model before deployment, SUSE OpenStack Cloud will only persist data when the administrator confirms that they are about to deploy the results via the "ready-deployment" operation. To understand this better, consider the following example:
Imagine you have completed your SUSE OpenStack Cloud deployment with servers A, B, and C and you want to add two new compute nodes by adding servers D and E to the input model.
When you add these to the input model and re-run the configuration processor, it will read the persisted data for A, B, and C and allocate D and E as new servers. The configuration processor now has allocation data for A, B, C, D, and E, which it keeps in a staging area (actually a special branch in Git) until you confirm that the configuration processor has done what you intended and you are ready to deploy the revised configuration.
If you notice that the role of E is wrong, for example it became a swift node instead of a nova node, you simply change the input model and re-run the configuration processor. This is fine because the allocations of D and E have not been confirmed, and so the configuration processor will re-read the data about A, B, and C and re-allocate D and E to the correct clusters, updating the persisted data in the staging area.
You can loop though this as many times as needed. Each time, the
configuration processor is processing the deltas to what is deployed, not the
results of the previous run. When you are ready to use the results of the
configuration processor, you run ready-deployment.yml
which commits the data in the staging area into the persisted data. The next
run of the configuration processor will then start from the persisted data
for A, B, C, D, and E.
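As a rough illustration of this workflow, the following sketch models the staging area and the persisted data as two simple stores. It is not the configuration processor's actual implementation, and all class, method, and variable names are invented for this example; the real staging area is a Git branch.

```python
# Simplified model of the staging/persist behavior described above (invented names).

class AllocationStore:
    def __init__(self):
        self.persisted = {}   # confirmed allocations (after ready-deployment)
        self.staging = {}     # results of the latest configuration processor run

    def run_config_processor(self, input_model_servers):
        # Each run starts from the persisted data, never from the previous
        # run's staging area, so unconfirmed allocations can be redone freely.
        allocations = dict(self.persisted)
        for server in input_model_servers:
            allocations.setdefault(server, "allocated")
        self.staging = allocations

    def ready_deployment(self):
        # Only at this point do staged allocations become persisted.
        self.persisted = dict(self.staging)


store = AllocationStore()
store.run_config_processor(["A", "B", "C"])
store.ready_deployment()                        # A, B, and C are now persisted

store.run_config_processor(["A", "B", "C", "D", "E"])
# D and E are only staged; editing the model and re-running recomputes
# them from the persisted data for A, B, and C.
store.ready_deployment()                        # now D and E are persisted too
```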
7.3.1 Persisted Server Allocations #
Server allocations are persisted by the administrator-defined server ID (see Section 6.5, “Servers”), and include the control plane, cluster/resource name, and ordinal within the cluster or resource group.
To guard against data loss, the configuration processor persists server allocations even when the server ID no longer exists in the input model. If, for example, a server were removed accidentally and the configuration processor then allocated a new server to the same ordinal, it would be very difficult to recover from that situation.
The following example illustrates the behavior:
A cloud is deployed with four servers with IDs of A, B, C, and D that can all be used in a resource group with min-size=0 and max-size=3. At the end of this deployment the persisted state is as follows:
ID | Control Plane | Resource Group | Ordinal | State | Deployed As |
---|---|---|---|---|---|
A | ccp | compute | 1 | Allocated | mycloud-ccp-comp0001 |
B | ccp | compute | 2 | Allocated | mycloud-ccp-comp0002 |
C | ccp | compute | 3 | Allocated | mycloud-ccp-comp0003 |
D | | | | Available | |
(In this example server D has not been allocated because the group is at its max size, and there are no other groups that required this server)
If server B is removed from the input model and the configuration processor is re-run, the state is changed to:
ID | Control Plane | Resource Group | Ordinal | State | Deployed As |
---|---|---|---|---|---|
A | ccp | compute | 1 | Allocated | mycloud-ccp-comp0001 |
B | ccp | compute | 2 | Deleted | |
C | ccp | compute | 3 | Allocated | mycloud-ccp-comp0003 |
D | ccp | compute | 4 | Allocated | mycloud-ccp-comp0004 |
The details associated with server B are still retained, but the configuration processor will not generate any deployment data for this server. Server D has been added to the group to meet the minimum size requirement but has been given a different ordinal and hence will get different names and IP addresses than were given to server B.
If server B is added back into the input model the resulting state will be:
ID | Control Plane | Resource Group | Ordinal | State | Deployed As |
---|---|---|---|---|---|
A | ccp | compute | 1 | Allocated | mycloud-ccp-comp0001 |
B | ccp | compute | 2 | Deleted | |
C | ccp | compute | 3 | Allocated | mycloud-ccp-comp0003 |
D | ccp | compute | 4 | Allocated | mycloud-ccp-comp0004 |
The configuration processor will issue a warning that server B cannot be returned to the compute group because it would exceed the max-size constraint. However, because the configuration processor knows that server B is associated with this group it will not allocate it to any other group that could use it, since that might lead to data loss on that server.
If the max-size value of the group was increased, then server B would be allocated back to the group with its previous name and addresses (mycloud-ccp-comp0002).
Note that the configuration processor relies on the server ID to identify a physical server. If the ID value of a server is changed the configuration processor will treat it as a new server. Conversely, if a different physical server is added with the same ID as a deleted server the configuration processor will assume that it is the original server being returned to the model.
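The behavior shown in the tables above can be summarized in a short sketch. It illustrates the persistence rules described in this section and is not the configuration processor's actual code; the data structures and function names are invented here.

```python
# Illustration of the persistence rules described above (invented names).
# Persisted allocations are keyed by server ID; ordinals are never reused,
# and deleted entries are retained so a returning server keeps its identity.

persisted = {
    "A": {"ordinal": 1, "state": "allocated"},
    "B": {"ordinal": 2, "state": "allocated"},
    "C": {"ordinal": 3, "state": "allocated"},
}

def allocated_count():
    return sum(1 for e in persisted.values() if e["state"] == "allocated")

def remove_server(server_id):
    # The entry is kept and only marked as deleted; no deployment data is
    # generated for it, but its ordinal stays reserved.
    persisted[server_id]["state"] = "deleted"

def add_server(server_id, max_size=3):
    entry = persisted.get(server_id)
    if entry is not None:
        # A returning server may only rejoin if the group has room.
        if allocated_count() >= max_size:
            print(f"WARNING: {server_id} cannot rejoin, max-size would be exceeded")
            return
        entry["state"] = "allocated"
    else:
        # A new server always gets a fresh ordinal, never a recycled one.
        next_ordinal = max(e["ordinal"] for e in persisted.values()) + 1
        persisted[server_id] = {"ordinal": next_ordinal, "state": "allocated"}

remove_server("B")   # B keeps ordinal 2 with state "deleted"
add_server("D")      # D gets ordinal 4, not B's old ordinal 2
add_server("B")      # warning: the group is already at max-size
```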
You can force the removal of persisted data for servers that are no longer in the input model by running the configuration processor with the remove_deleted_servers option, as shown below:

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml \
  -e remove_deleted_servers="y"
7.3.2 Persisted Address Allocations #
The configuration processor persists IP address allocations by the generated name (see Section 7.2, “Name Generation” for how names are generated). As with servers, once an address has been allocated, that address will remain allocated until the configuration processor is explicitly told that it is no longer required. The configuration processor will generate warnings for addresses that are persisted but no longer used.
You can remove persisted address allocations that are no longer used in the input model by running the configuration processor with the free_unused_addresses option, as shown below:

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml \
  -e free_unused_addresses="y"
7.4 Server Allocation #
The configuration processor allocates servers to a cluster or resource group in the following sequence:
1. Any servers that are persisted with a state of "allocated" are first returned to the cluster or resource group. Such servers are always allocated even if this contradicts the cluster size, failure-zones, or list of server roles, since it is assumed that these servers are actively deployed.

2. If the cluster or resource group is still below its minimum size, then any servers that are persisted with a state of "deleted", but where the server is now listed in the input model (that is, the server was removed but is now back), are added to the group provided they meet the failure-zone and server-role criteria. If they do not meet the criteria, a warning is given and the server remains in a deleted state (that is, it is still not allocated to any other cluster or group). Such servers are not part of the current deployment, and so you must resolve any conflicts before they can be redeployed.

3. If the cluster or resource group is still below its minimum size, the configuration processor will allocate additional servers that meet the failure-zone and server-role criteria. If the allocation policy is set to "strict", then the failure zones of servers already in the cluster or resource group are not considered until an equal number of servers has been allocated from each zone.
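The sequence above can be sketched in a few lines of code. This is an illustration only, not the configuration processor's implementation; the record layout (dicts with "state", "group", "role", and "failure_zone" fields) is invented for this example, and the failure-zone check in step 2 is omitted for brevity.

```python
# Illustrative sketch of the three-step allocation sequence described above.
from collections import Counter

def allocate(group, servers, min_size, required_role, strict=True):
    # 1. Servers persisted as "allocated" to this group always come back
    #    first, regardless of current size, zone, or role constraints.
    members = [s for s in servers
               if s["group"] == group and s["state"] == "allocated"]

    # 2. Deleted servers that reappear in the input model may rejoin,
    #    provided they still meet the allocation criteria.
    for s in servers:
        if len(members) >= min_size:
            break
        if (s["group"] == group and s["state"] == "deleted"
                and s["role"] == required_role):
            s["state"] = "allocated"
            members.append(s)

    # 3. Fill any remaining shortfall with unallocated servers; a "strict"
    #    policy always draws from the least-used failure zone first.
    while len(members) < min_size:
        free = [s for s in servers
                if s["state"] is None and s["role"] == required_role]
        if not free:
            break
        if strict:
            used = Counter(m["failure_zone"] for m in members)
            free.sort(key=lambda s: used[s["failure_zone"]])
        chosen = free[0]
        chosen.update(state="allocated", group=group)
        members.append(chosen)
    return members
```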
7.5 Server Network Selection #
Once the configuration processor has allocated a server to a cluster or resource group, it uses the information in the associated interface-model to determine which networks need to be configured. It does this by:

Looking at the service components that are to run on the server (from the control-plane definition)

Looking to see which network-groups each of those components is attached to (from the network-groups definition)

Looking to see if there are any load-balancers related to a service component running on this server, and if so, adding those network-groups to the list (also from the network-groups definition)

Looking to see if there are any network-groups that the interface-model says should be forced onto the server

It then searches the server-groups hierarchy (as described in Section 5.2.9.2, “Server Groups and Networks”) to find a network in each of the network-groups it needs to attach to.

If there is no network available to a server, either because the interface-model does not include the required network-group, or there is no network from that group in the appropriate part of the server-groups hierarchy, then the configuration processor will generate an error.

The configuration processor will also generate an error if the server address does not match any of the networks it will be connected to.
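The selection logic can be sketched as follows. The input structures (component lists, network-group attachments, forced groups) are simplified stand-ins for the real input model, and all names are invented for this illustration; the example values mirror the compute-server scenario in Section 7.6.

```python
# Simplified illustration of server network selection (invented names).

def networks_to_configure(server_components, endpoint_attachments,
                          lb_attachments, forced_groups, server_group_networks):
    """Return the networks that must be configured on one server.

    server_components     -- service components the control plane places on the server
    endpoint_attachments  -- network-groups each component's endpoint attaches to
    lb_attachments        -- network-groups of any load balancer in front of a component
    forced_groups         -- network-groups forced by the interface-model
    server_group_networks -- network available in each group for this server's
                             position in the server-group hierarchy
    """
    needed_groups = set(forced_groups)
    for component in server_components:
        needed_groups.update(endpoint_attachments.get(component, []))
        needed_groups.update(lb_attachments.get(component, []))

    networks = {}
    for group in needed_groups:
        network = server_group_networks.get(group)
        if network is None:
            raise ValueError(f"no network available for network-group {group}")
        networks[group] = network
    return networks


print(networks_to_configure(
    server_components=["nova-compute"],
    endpoint_attachments={"nova-compute": ["MANAGEMENT"]},
    lb_attachments={},
    forced_groups=["ISCSI"],
    server_group_networks={"MANAGEMENT": "MANAGEMENT-NET-RACK1",
                           "ISCSI": "ISCSI-NET"},
))
# e.g. {'MANAGEMENT': 'MANAGEMENT-NET-RACK1', 'ISCSI': 'ISCSI-NET'} (order may vary)
```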
7.6 Network Route Validation #
Once the configuration processor has allocated all of the required servers and matched them to the appropriate networks, it validates that all service components have the required network routes to other service components.
It does this by using the data in the services section of the input model, which provides details of which service components need to connect to each other. This data is not configurable by the administrator; however, it is provided as part of the SUSE OpenStack Cloud release.
For each server, the configuration processor looks at the list of service components it runs and determines the network addresses of every other service component it needs to connect to (depending on the service, this might be a virtual IP address on a load balancer or a set of addresses for the service).
If the target address is on a network that this server is connected to, then there is no routing required. If the target address is on a different network, then the configuration processor looks at each network the server is connected to and at the routes defined in the corresponding network-group. If the network-group provides a route to the network-group of the target address, then that route is considered valid. Routes are defined in the routes stanza of the network-group definition. Routes to a named network-group are always considered before a "default" route.
A warning is given for any route which uses the "default" route, since it is possible that the user did not intend to route this traffic. Such warnings can be removed by adding the appropriate network-group to the list of routes.
The configuration processor provides details of all routes between networks
that it is expecting to be configured in the
info/route_info.yml
file.
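A minimal sketch of the route check between a source and a target network-group follows. The data layout (a dict of network-groups with their routes stanzas) and the function name are invented for this illustration.

```python
# Minimal illustration of the route validation rule described above.
# network_groups maps a group name to the routes declared in its
# "routes" stanza; the structure is simplified for this sketch.

def check_route(source_group, target_group, network_groups):
    """Return (valid, used_default) for traffic from source to target group."""
    if source_group == target_group:
        return True, False            # same network-group: no routing needed
    routes = network_groups.get(source_group, {}).get("routes", [])
    if target_group in routes:
        return True, False            # explicit route to the named group
    if "default" in routes:
        return True, True             # valid, but triggers a warning
    return False, False               # no route: configuration error


groups = {"MANAGEMENT": {"routes": ["INTERNAL-API", "default"]}}
print(check_route("MANAGEMENT", "INTERNAL-API", groups))  # (True, False)
print(check_route("MANAGEMENT", "ISCSI", groups))         # (True, True) -> warning
```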
To illustrate how network routing is defined in the input model, consider the following example:
A compute server is configured to run nova-compute
which requires access to
the neutron API servers and a block storage service. The neutron API
servers have a virtual IP address provided by a load balancer in the
INTERNAL-API network-group and the storage service is connected to the ISCSI
network-group. nova-compute
itself is part of the set of components attached
by default to the MANAGEMENT network-group. The intention is to have virtual
machines on the compute server connect to the block storage via the ISCSI
network.
The physical network is shown below:
The corresponding entries in the network-groups definition are:

- name: INTERNAL-API
  hostname-suffix: intapi
  load-balancers:
    - provider: ip-cluster
      name: lb
      components:
        - default
      roles:
        - internal
        - admin

- name: MANAGEMENT
  hostname-suffix: mgmt
  hostname: true
  component-endpoints:
    - default
  routes:
    - INTERNAL-API
    - default

- name: ISCSI
  hostname-suffix: iscsi
  component-endpoints:
    - storage service
And the interface-model for the compute server looks like this:

- name: INTERFACE_SET_COMPUTE
  network-interfaces:
    - name: BOND0
      device:
        name: bond0
      bond-data:
        options:
          mode: active-backup
          miimon: 200
          primary: hed5
        provider: linux
        devices:
          - name: hed4
          - name: hed5
      network-groups:
        - MANAGEMENT
        - ISCSI
When validating the route from nova-compute
to the neutron API, the
configuration processor will detect that the target address is on a network
in the INTERNAL-API network group, and that the MANAGEMENT network (which is
connected to the compute server) provides a route to this network, and thus
considers this route valid.
When validating the route from nova-compute
to a storage service, the
configuration processor will detect that the target address is on a network
in the ISCSI network group. However, because there is no service component on
the compute server connected to the ISCSI network (according to the
network-group definition) the ISCSI network will not have been configured on
the compute server (see Section 7.5, “Server Network Selection”). The
configuration processor will detect that the MANAGEMENT network-group provides
a "default" route and thus considers the route as valid (it is, of course,
valid to route ISCSI traffic). However, because this is using the default
route, a warning will be issued:
# route-generator-2.0 WRN: Default routing used between networks
The following networks are using a 'default' route rule. To remove this warning
either add an explicit route in the source network group or force the network
to attach in the interface model used by the servers.
  MANAGEMENT-NET-RACK1 to ISCSI-NET
    ardana-ccp-comp0001
  MANAGEMENT-NET-RACK2 to ISCSI-NET
    ardana-ccp-comp0002
  MANAGEMENT-NET-RACK3 to ISCSI-NET
    ardana-ccp-comp0003
To remove this warning, you can either add ISCSI to the list of routes in the MANAGEMENT network group (routed ISCSI traffic is still a valid configuration) or force the compute server to attach to the ISCSI network-group by adding it as a forced-network-group in the interface-model, like this:
- name: INTERFACE_SET_COMPUTE
  network-interfaces:
    - name: BOND0
      device:
        name: bond0
      bond-data:
        options:
          mode: active-backup
          miimon: 200
          primary: hed5
        provider: linux
        devices:
          - name: hed4
          - name: hed5
      network-groups:
        - MANAGEMENT
      forced-network-groups:
        - ISCSI
With the attachment to the ISCSI network group forced, the configuration processor will attach the compute server to a network in that group and validate the route as either being direct or between networks in the same network-group.
The generated route_info.yml
file will include entries
such as the following, showing the routes that are still expected to be
configured between networks in the MANAGEMENT network group and the
INTERNAL-API network group.
MANAGEMENT-NET-RACK1:
  INTERNAL-API-NET:
    default: false
    used_by:
      nova-compute:
        neutron-server:
          - ardana-ccp-comp0001
MANAGEMENT-NET-RACK2:
  INTERNAL-API-NET:
    default: false
    used_by:
      nova-compute:
        neutron-server:
          - ardana-ccp-comp0003
7.7 Configuring neutron Provider VLANs #
neutron provider VLANs are networks that map directly to an 802.1Q VLAN in the cloud provider’s physical network infrastructure. There are four aspects to a provider VLAN configuration:
Network infrastructure configuration (for example, the top-of-rack switch)
Server networking configuration (for compute nodes and neutron network nodes)
neutron configuration file settings
Creation of the corresponding network objects in neutron
The physical network infrastructure must be configured to convey the provider VLAN traffic as tagged VLANs to the cloud compute nodes and neutron network nodes. Configuration of the physical network infrastructure is outside the scope of the SUSE OpenStack Cloud 9 software.
SUSE OpenStack Cloud 9 automates the server networking configuration and the neutron
configuration based on information in the cloud definition. To configure the
system for provider VLANs, specify the
neutron.networks.vlan
tag with a
provider-physical-network
attribute on one or more
as described in
Section 6.13.2, “Network Tags”. For example (some
attributes omitted for brevity):
network-groups:
  - name: NET_GROUP_A
    tags:
      - neutron.networks.vlan:
          provider-physical-network: physnet1
  - name: NET_GROUP_B
    tags:
      - neutron.networks.vlan:
          provider-physical-network: physnet2
A network-group is associated with a server network interface via an interface-model, as described in Section 6.11, “Interface Models”. For example (some attributes omitted for brevity):

interface-models:
  - name: INTERFACE_SET_X
    network-interfaces:
      - device:
          name: bond0
        network-groups:
          - NET_GROUP_A
      - device:
          name: hed3
        network-groups:
          - NET_GROUP_B
A network group used for provider VLANs should contain a single SUSE OpenStack Cloud network, because that VLAN must span all
compute nodes and any neutron network nodes/controllers (that is, it is a single
L2 segment). The SUSE OpenStack Cloud network must be defined with
tagged-vlan: false, otherwise a Linux VLAN network
interface will be created. For example:
networks:
  - name: NET_A
    tagged-vlan: false
    network-group: NET_GROUP_A
  - name: NET_B
    tagged-vlan: false
    network-group: NET_GROUP_B
When the cloud is deployed, SUSE OpenStack Cloud 9 will create the appropriate bridges on the servers, and set the appropriate attributes in the neutron configuration files (for example, bridge_mappings).
After the cloud has been deployed, create neutron network objects for each provider VLAN using the OpenStackClient CLI:
tux > sudo openstack network create --provider-network-type vlan \
  --provider-physical-network physnet1 --provider-segment 101 MYNET101
tux > sudo openstack network create --provider-network-type vlan \
  --provider-physical-network physnet2 --provider-segment 234 MYNET234
7.8 Standalone Cloud Lifecycle Manager #
All the example configurations use a “deployer-in-the-cloud”
scenario where the first controller is also the deployer/Cloud Lifecycle Manager. If you want
to use a standalone Cloud Lifecycle Manager, you need to add the relevant details in
control_plane.yml, servers.yml, and
related configuration files. Detailed instructions are available at Section 12.1, “Using a Dedicated Cloud Lifecycle Manager Node”.