12 Deploying the OpenStack Services #
After the nodes are installed and configured you can start deploying the OpenStack components to finalize the installation. The components need to be deployed in a given order, because they depend on one another. The component for an HA setup is the only exception from this rule—it can be set up at any time. However, when deploying SUSE OpenStack Cloud Crowbar from scratch, we recommend deploying the proposal(s) first. Deployment for all components is done from the Crowbar Web interface through recipes, so-called “barclamps”. (See Section 12.24, “Roles and Services in SUSE OpenStack Cloud Crowbar” for a table of all roles and services, and how to start and stop them.)
The components controlling the cloud, including storage management and control components, need to be installed on the Control Node(s) (refer to Section 1.2, “The Control Node(s)” for more information). However, you may not use your Control Node(s) as a compute node or storage host for swift. Do not install the compute or swift storage components on the Control Node(s). These components must be installed on dedicated Storage Nodes and Compute Nodes.
When deploying an HA setup, the Control Nodes are replaced by one or more controller clusters consisting of at least two nodes, and three are recommended. We recommend setting up three separate clusters for data, services, and networking. See Section 2.6, “High Availability” for more information on requirements and recommendations for an HA setup.
The OpenStack components need to be deployed in the following order. For general instructions on how to edit and deploy barclamps, refer to Section 10.3, “Deploying Barclamp Proposals”. Any optional components that you elect to use must be installed in their correct order.
12.1 Deploying designate #
designate provides SUSE OpenStack Cloud Crowbar DNS as a Service (DNSaaS). It is used to create and propagate zones and records over the network using pools of DNS servers. Deployment defaults are in place, so not much is required to configure designate. neutron needs additional settings for integration with designate; these are set in the [designate] section of the neutron configuration.
The designate barclamp relies heavily on the DNS barclamp and expects it to be applied without any failures.
To deploy designate, the DNS barclamp must include at least one node other than the Admin Node. Because the Admin Node is not added to the public network, another node is needed that can be attached to the public network and appear in the designate default pool.
In highly available deployments where the designate services run in a cluster, we recommend running the DNS services in a cluster as well. For example, in a typical HA deployment where the controllers are deployed in a 3-node cluster, the DNS barclamp should be applied to all the controllers, in the same manner as designate.
- designate-server role
Installs the designate server packages and configures the mini-dns (mdns) service required by designate.
- designate-worker role
Configures a designate worker on the selected nodes. designate uses the workers to distribute its workload.
designate Sink is an optional service and is not configured as part of this barclamp.
designate uses pool(s) over which it can distribute zones and records. Pools can have varied configuration. Any misconfiguration can lead to information leakage.
The designate barclamp creates a default Bind9 pool out of the box, which can be modified later as needed. The default Bind9 pool configuration is created by Crowbar on a node with the designate-server role in /etc/designate/pools.crowbar.yaml. You can copy this file and edit it according to your requirements. Then provide this configuration to designate using the command:
tux > designate-manage pool update --file /etc/designate/pools.crowbar.yaml
The dns_domain specified in the [designate] section of the neutron configuration is the default zone where DNS records for neutron resources are created via the neutron-designate integration. If this is desired, you have to create this zone explicitly using the following command:
ardana > openstack zone create --email EMAIL DNS_DOMAIN
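For example, assuming a hypothetical dns_domain of cloud.example.com. (zone names must be fully qualified and end with a trailing dot), the call might look like this:
ardana > openstack zone create --email dnsmaster@example.com cloud.example.com.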
Editing the designate proposal:
12.1.1 Using PowerDNS Backend #
Designate uses the Bind9 backend by default. It is also possible to use the PowerDNS backend in addition to, or as an alternative to, the Bind9 backend. To do so, PowerDNS must be deployed manually, as the designate barclamp currently does not provide any facility to automatically install and configure PowerDNS. This section outlines the steps to deploy the PowerDNS backend.
If PowerDNS is already deployed, you may skip Section 12.1.1.1, “Install PowerDNS” and jump to Section 12.1.1.2, “Configure Designate To Use PowerDNS Backend”.
12.1.1.1 Install PowerDNS #
Follow these steps to install and configure PowerDNS on a Crowbar node. Keep in mind that PowerDNS must be deployed with MySQL backend.
We recommend running PowerDNS in a cluster in highly available deployments where the Designate services are running in a cluster. For example, in a typical HA deployment where the controllers are deployed in a 3-node cluster, PowerDNS should be running on all the controllers, in the same manner as Designate.
Install PowerDNS packages.
root # zypper install pdns pdns-backend-mysql
Edit /etc/pdns/pdns.conf and provide these options (see https://doc.powerdns.com/authoritative/settings.html for a complete reference):
- api
Set it to yes to enable the web service REST API.
- api-key
Static REST API access key. Use a secure random string here.
- launch
Must be set to gmysql to use the MySQL backend.
- gmysql-host
Hostname (i.e. FQDN) or IP address of the MySQL server.
- gmysql-user
MySQL user with full access to the PowerDNS database.
- gmysql-password
Password for the MySQL user.
- gmysql-dbname
MySQL database name for PowerDNS.
- local-port
Port number where PowerDNS listens for incoming requests.
- setgid
The group that the PowerDNS process runs under.
- setuid
The user that the PowerDNS process runs under.
- webserver
Must be set to yes to enable the web service REST API.
- webserver-address
Hostname (FQDN) or IP address of the PowerDNS web service.
- webserver-allow-from
List of IP addresses (IPv4 or IPv6) of the nodes that are permitted to talk to the PowerDNS web service. These must include the IP addresses of the Designate worker nodes.
For example:
api=yes
api-key=Sfw234sDFw90z
launch=gmysql
gmysql-host=mysql.acme.com
gmysql-user=powerdns
gmysql-password=SuperSecured123
gmysql-dbname=powerdns
local-port=54
setgid=pdns
setuid=pdns
webserver=yes
webserver-address=192.168.124.83
webserver-allow-from=0.0.0.0/0,::/0
Log in to MySQL from a Crowbar MySQL node and create the PowerDNS database and the user which has full access to the PowerDNS database. Remember, the database name, user name, and password must match the gmysql-dbname, gmysql-user, and gmysql-password values that were specified above.
For example:
root # mysql
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 20075
Server version: 10.2.29-MariaDB-log SUSE package
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> CREATE DATABASE powerdns;
Query OK, 1 row affected (0.01 sec)
MariaDB [(none)]> GRANT ALL ON powerdns.* TO 'powerdns'@'localhost' IDENTIFIED BY 'SuperSecured123';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL ON powerdns.* TO 'powerdns'@'192.168.124.83' IDENTIFIED BY 'SuperSecured123';
Query OK, 0 rows affected, 1 warning (0.02 sec)
MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.01 sec)
MariaDB [(none)]> exit
Bye
Create a MySQL schema file, named powerdns-schema.sql, with the following content:
/* SQL statements to create tables in designate_pdns DB.
   Note: This file is taken as is from:
   https://raw.githubusercontent.com/openstack/designate/master/devstack/designate_plugins/backend-pdns4-mysql-db.sql */
CREATE TABLE domains (
  id INT AUTO_INCREMENT,
  name VARCHAR(255) NOT NULL,
  master VARCHAR(128) DEFAULT NULL,
  last_check INT DEFAULT NULL,
  type VARCHAR(6) NOT NULL,
  notified_serial INT DEFAULT NULL,
  account VARCHAR(40) DEFAULT NULL,
  PRIMARY KEY (id)
) Engine=InnoDB;

CREATE UNIQUE INDEX name_index ON domains(name);

CREATE TABLE records (
  id INT AUTO_INCREMENT,
  domain_id INT DEFAULT NULL,
  name VARCHAR(255) DEFAULT NULL,
  type VARCHAR(10) DEFAULT NULL,
  -- Changed to "TEXT", as VARCHAR(65000) is too big for most MySQL installs
  content TEXT DEFAULT NULL,
  ttl INT DEFAULT NULL,
  prio INT DEFAULT NULL,
  change_date INT DEFAULT NULL,
  disabled TINYINT(1) DEFAULT 0,
  ordername VARCHAR(255) BINARY DEFAULT NULL,
  auth TINYINT(1) DEFAULT 1,
  PRIMARY KEY (id)
) Engine=InnoDB;

CREATE INDEX nametype_index ON records(name,type);
CREATE INDEX domain_id ON records(domain_id);
CREATE INDEX recordorder ON records (domain_id, ordername);

CREATE TABLE supermasters (
  ip VARCHAR(64) NOT NULL,
  nameserver VARCHAR(255) NOT NULL,
  account VARCHAR(40) NOT NULL,
  PRIMARY KEY (ip, nameserver)
) Engine=InnoDB;

CREATE TABLE comments (
  id INT AUTO_INCREMENT,
  domain_id INT NOT NULL,
  name VARCHAR(255) NOT NULL,
  type VARCHAR(10) NOT NULL,
  modified_at INT NOT NULL,
  account VARCHAR(40) NOT NULL,
  -- Changed to "TEXT", as VARCHAR(65000) is too big for most MySQL installs
  comment TEXT NOT NULL,
  PRIMARY KEY (id)
) Engine=InnoDB;

CREATE INDEX comments_domain_id_idx ON comments (domain_id);
CREATE INDEX comments_name_type_idx ON comments (name, type);
CREATE INDEX comments_order_idx ON comments (domain_id, modified_at);

CREATE TABLE domainmetadata (
  id INT AUTO_INCREMENT,
  domain_id INT NOT NULL,
  kind VARCHAR(32),
  content TEXT,
  PRIMARY KEY (id)
) Engine=InnoDB;

CREATE INDEX domainmetadata_idx ON domainmetadata (domain_id, kind);

CREATE TABLE cryptokeys (
  id INT AUTO_INCREMENT,
  domain_id INT NOT NULL,
  flags INT NOT NULL,
  active BOOL,
  content TEXT,
  PRIMARY KEY(id)
) Engine=InnoDB;

CREATE INDEX domainidindex ON cryptokeys(domain_id);

CREATE TABLE tsigkeys (
  id INT AUTO_INCREMENT,
  name VARCHAR(255),
  algorithm VARCHAR(50),
  secret VARCHAR(255),
  PRIMARY KEY (id)
) Engine=InnoDB;

CREATE UNIQUE INDEX namealgoindex ON tsigkeys(name, algorithm);
Create the PowerDNS schema for the database using the mysql CLI. For example:
root # mysql powerdns < powerdns-schema.sql
Enable and start the pdns systemd service.
root # systemctl enable pdns
root # systemctl start pdns
If pdns is successfully running, you should see the following logs by running the journalctl -u pdns command.
Feb 07 01:44:12 d52-54-77-77-01-01 systemd[1]: Started PowerDNS Authoritative Server.
Feb 07 01:44:12 d52-54-77-77-01-01 pdns_server[21285]: Done launching threads, ready to distribute questions
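Optionally, you can also confirm that the REST API used by Designate is reachable. A minimal check with curl, assuming the example api-key and webserver-address shown above:
root # curl -s -H 'X-API-Key: Sfw234sDFw90z' http://192.168.124.83:8081/api/v1/servers
A JSON description of the authoritative server should be returned.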
12.1.1.2 Configure Designate To Use PowerDNS Backend #
Configure Designate to use the PowerDNS backend by appending the PowerDNS servers to the /etc/designate/pools.crowbar.yaml file on a Designate worker node.
If you are replacing the Bind9 backend with the PowerDNS backend, make sure to remove the bind9 entries from /etc/designate/pools.crowbar.yaml.
In an HA deployment, there should be multiple PowerDNS entries.
Also, make sure the api_token matches the api-key that was specified in the /etc/pdns/pdns.conf file earlier.
Append the PowerDNS entries to the end of /etc/designate/pools.crowbar.yaml. For example:
---
- name: default-bind
  description: Default BIND9 Pool
  id: 794ccc2c-d751-44fe-b57f-8894c9f5c842
  attributes: {}
  ns_records:
  - hostname: public-d52-54-77-77-01-01.virtual.cloud.suse.de.
    priority: 1
  - hostname: public-d52-54-77-77-01-02.virtual.cloud.suse.de.
    priority: 1
  nameservers:
  - host: 192.168.124.83
    port: 53
  - host: 192.168.124.81
    port: 53
  also_notifies: []
  targets:
  - type: bind9
    description: BIND9 Server
    masters:
    - host: 192.168.124.83
      port: 5354
    - host: 192.168.124.82
      port: 5354
    - host: 192.168.124.81
      port: 5354
    options:
      host: 192.168.124.83
      port: 53
      rndc_host: 192.168.124.83
      rndc_port: 953
      rndc_key_file: "/etc/designate/rndc.key"
  - type: bind9
    description: BIND9 Server
    masters:
    - host: 192.168.124.83
      port: 5354
    - host: 192.168.124.82
      port: 5354
    - host: 192.168.124.81
      port: 5354
    options:
      host: 192.168.124.81
      port: 53
      rndc_host: 192.168.124.81
      rndc_port: 953
      rndc_key_file: "/etc/designate/rndc.key"
  - type: pdns4
    description: PowerDNS4 DNS Server
    masters:
    - host: 192.168.124.83
      port: 5354
    - host: 192.168.124.82
      port: 5354
    - host: 192.168.124.81
      port: 5354
    options:
      host: 192.168.124.83
      port: 54
      api_endpoint: http://192.168.124.83:8081
      api_token: Sfw234sDFw90z
Update the pools using the designate-manage CLI.
tux > designate-manage pool update --file /etc/designate/pools.crowbar.yaml
Once Designate syncs up with PowerDNS, you should see the domains in the PowerDNS database reflecting the zones in Designate.
It may take a few minutes for Designate to sync with PowerDNS.
You can verify that the domains have successfully synced by inspecting the domains table in the database.
For example:
root #
mysql powerdns
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 21131
Server version: 10.2.29-MariaDB-log SUSE package
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [powerdns]> select * from domains;
+----+---------+--------------------------------------------------------------+------------+-------+-----------------+---------+
| id | name | master | last_check | type | notified_serial | account |
+----+---------+--------------------------------------------------------------+------------+-------+-----------------+---------+
| 1 | foo.bar | 192.168.124.81:5354 192.168.124.82:5354 192.168.124.83:5354 | NULL | SLAVE | NULL | |
+----+---------+--------------------------------------------------------------+------------+-------+-----------------+---------+
1 row in set (0.00 sec)
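You can also cross-check from the Designate side with the OpenStack CLI. A quick sketch (the zone name foo.bar. is an example only):
tux > openstack zone list
tux > openstack recordset list foo.bar.
Zones managed by Designate should appear with a matching SLAVE domain entry in the PowerDNS database.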
12.2 Deploying Pacemaker (Optional, HA Setup Only) #
To make the SUSE OpenStack Cloud controller functions and the Compute Nodes highly available, set up one or more clusters by deploying Pacemaker (see Section 2.6, “High Availability” for details). Since it is possible (and recommended) to deploy more than one cluster, a separate proposal needs to be created for each cluster.
Deploying Pacemaker is optional. In case you do not want to deploy it, skip this section and start the node deployment by deploying the database as described in Section 12.3, “Deploying the Database”.
To set up a cluster, at least two nodes are required. See Section 2.6.5, “Cluster Requirements and Recommendations” for more information.
To create a proposal, go to
› and click for the Pacemaker barclamp. A drop-down box where you can enter a name and a description for the proposal opens. Click to open the configuration screen for the proposal.The name you enter for the proposal will be used to generate host names for the virtual IP addresses of HAProxy. By default, the names follow this scheme:
cluster-PROPOSAL_NAME.FQDN (for the internal name)
public-cluster-PROPOSAL_NAME.FQDN (for the public name)
For example, when PROPOSAL_NAME is set to data, this results in the following names:
cluster-data.example.com
public-cluster-data.example.com
For requirements regarding SSL encryption and certificates, see Section 2.3, “SSL Encryption”.
The following options are configurable in the Pacemaker configuration screen:
- Transport for Communication
Choose a technology used for cluster communication. You can choose between multicast, sending a message to multiple destinations, and unicast, sending a message to a single destination. By default, unicast is used.
Whenever communication fails between one or more nodes and the rest of the cluster, a “cluster partition” occurs. The nodes of a cluster are split into partitions but are still active. They can only communicate with nodes in the same partition and are unaware of the separated nodes. The cluster partition that has the majority of nodes is defined to have “quorum”.
This configuration option defines what to do with the cluster partition(s) that do not have the quorum. See https://documentation.suse.com/sle-ha/15-SP1/single-html/SLE-HA-guide/#sec-conf-hawk2-cluster-config, for details.
The recommended setting is to choose . However, is enforced for two-node clusters to ensure that the remaining node continues to operate normally in case the other node fails. For clusters using shared resources, choosing may be used to ensure that these resources continue to be available.
- STONITH: Configuration mode for STONITH
“Misbehaving” nodes in a cluster are shut down to prevent them from causing trouble. This mechanism is called STONITH (“Shoot the other node in the head”). STONITH can be configured in a variety of ways; refer to https://documentation.suse.com/sle-ha/15-SP1/single-html/SLE-HA-guide/#cha-ha-fencing for details. The following configuration options exist:
STONITH will not be configured when deploying the barclamp. It needs to be configured manually as described in https://documentation.suse.com/sle-ha/15-SP1/single-html/SLE-HA-guide/#cha-ha-fencing. For experts only.
Using this option automatically sets up STONITH with data received from the IPMI barclamp. Being able to use this option requires that IPMI is configured for all cluster nodes. This should be done by default. To check or change the IPMI deployment, go to
› › › . Also make sure the option is set to on this barclamp.
Important: STONITH Devices Must Support IPMI
To configure STONITH with the IPMI data, all STONITH devices must support IPMI. Problems with this setup may occur with IPMI implementations that are not strictly standards compliant. In this case it is recommended to set up STONITH with STONITH block devices (SBD).
This option requires manually setting up shared storage and a watchdog on the cluster nodes before applying the proposal. To do so, proceed as follows:
Prepare the shared storage. The path to the shared storage device must be persistent and consistent across all nodes in the cluster. The SBD device must not use host-based RAID or cLVM2.
Install the package sbd on all cluster nodes.
Initialize the SBD device by running the following command. Make sure to replace /dev/SBD with the path to the shared storage device.
sbd -d /dev/SBD create
Refer to https://documentation.suse.com/sle-ha/15-SP1/single-html/SLE-HA-guide/#sec-ha-storage-protect-test for details.
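A minimal sketch of initializing and checking an SBD device, assuming a hypothetical shared disk /dev/disk/by-id/scsi-SBD_DEVICE:
root # sbd -d /dev/disk/by-id/scsi-SBD_DEVICE create
root # sbd -d /dev/disk/by-id/scsi-SBD_DEVICE dump
The dump command prints the header metadata (such as the watchdog and msgwait timeouts) if the device was initialized successfully.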
In , specify the respective kernel module to be used. Find the most commonly used watchdog drivers in the following table:
Hardware | Driver
HP | hpwdt
Dell, Fujitsu, Lenovo (Intel TCO) | iTCO_wdt
Generic | softdog
If your hardware is not listed above, either ask your hardware vendor for the right name or check the following directory for a list of choices: /lib/modules/KERNEL_VERSION/kernel/drivers/watchdog.
Alternatively, list the drivers that have been installed with your kernel version:
.Alternatively, list the drivers that have been installed with your kernel version:
root # rpm -ql kernel-VERSION | grep watchdog
If the nodes need different watchdog modules, leave the text box empty.
After the shared storage has been set up, specify the path using the “by-id” notation (/dev/disk/by-id/DEVICE). It is possible to specify multiple paths as a comma-separated list.
Deploying the barclamp will automatically complete the SBD setup on the cluster nodes by starting the SBD daemon and configuring the fencing resource.
All nodes will use the identical configuration. Specify the to use and enter for the agent.
To get a list of STONITH devices which are supported by the High Availability Extension, run the following command on an already installed cluster node: stonith -L. The list of parameters depends on the respective agent. To view a list of parameters, use the following command:
stonith -t agent -n
All nodes in the cluster use the same , but can be configured with different parameters. This setup is, for example, required when nodes are in different chassis and therefore need different IPMI parameters.
To get a list of STONITH devices which are supported by the High Availability Extension, run the following command on an already installed cluster node: stonith -L. The list of parameters depends on the respective agent. To view a list of parameters, use the following command:
stonith -t agent -n
Use this setting for completely virtualized test installations. This option is not supported.
- STONITH: Do not start corosync on boot after fencing
With STONITH, Pacemaker clusters with two nodes may sometimes hit an issue known as STONITH deathmatch where each node kills the other one, resulting in both nodes rebooting all the time. Another similar issue in Pacemaker clusters is the fencing loop, where a reboot caused by STONITH will not be enough to fix a node and it will be fenced again and again.
This setting can be used to limit these issues. When set to , a node that has not been properly shut down or rebooted will not start the services for Pacemaker on boot. Instead, the node will wait for action from the SUSE OpenStack Cloud operator. When set to , the services for Pacemaker will always be started on boot. The value is used to have the most appropriate value automatically picked: it will be for two-node clusters (to avoid STONITH deathmatches), and otherwise.
When a node boots but does not start corosync because of this setting, the node's status in is set to "Problem" (red dot).
- Mail Notifications: Enable Mail Notifications
Get notified of cluster node failures via e-mail. If set to
, you need to specify which to use, a prefix for the mails' subject, and sender and recipient addresses. Note that the SMTP server must be accessible by the cluster nodes.
The public name is the host name that will be used instead of the generated public name (see Important: Proposal Name) for the public virtual IP address of HAProxy. (This is the case when registering public endpoints, for example). Any name specified here needs to be resolved by a name server placed outside of the SUSE OpenStack Cloud network.
The Pacemaker component consists of the following roles. Deploying the role is optional:
Deploy this role on all nodes that should become members of the cluster.
Deploying this role is optional. If deployed, it sets up the Hawk Web interface which lets you monitor the status of the cluster. The Web interface can be accessed via https://IP-ADDRESS:7630. The default Hawk credentials are user name hacluster, password crowbar.
The password is visible and editable in the view of the Pacemaker barclamp, and also in the "corosync": section of the view.
Note that the GUI on SUSE OpenStack Cloud can only be used to monitor the cluster status and not to change its configuration.
Deploy this role on all nodes that should become members of the Compute Nodes cluster. They will run as Pacemaker remote nodes that are controlled by the cluster, but do not affect quorum. Instead of the complete cluster stack, only the pacemaker-remote component will be installed on these nodes.
After a cluster has been successfully deployed, it is listed under in the section and can be used for role deployment like a regular node.
When using clusters, roles from other barclamps must never be deployed to single nodes that are already part of a cluster. The only exceptions from this rule are the following roles:
cinder-volume
swift-proxy + swift-dispersion
swift-ring-compute
swift-storage
After a role has been deployed on a cluster, its services are managed by the HA software. You must never manually start or stop an HA-managed service, nor configure it to start on boot. Services may only be started or stopped by using the cluster management tools Hawk or the crm shell. See https://documentation.suse.com/sle-ha/15-SP1/single-html/SLE-HA-guide/#sec-ha-config-basics-resources for more information.
To check whether all cluster resources are running, either use the Hawk Web interface or run the command crm_mon -1r. If this is not the case, clean up the respective resource with crm resource cleanup RESOURCE, so it gets respawned.
Also make sure that STONITH correctly works before continuing with the SUSE OpenStack Cloud setup. This is especially important when having chosen a STONITH configuration requiring manual setup. To test if STONITH works, log in to a node on the cluster and run the following command:
pkill -9 corosync
In case STONITH is correctly configured, the node will reboot.
Before testing on a production cluster, plan a maintenance window in case issues should arise.
12.3 Deploying the Database #
The very first service that needs to be deployed is the . The database component uses MariaDB and is used by all other components. It must be installed on a Control Node. The database can be made highly available by deploying it on a cluster.
The only attribute you may change is the maximum number of database connections ( ). The default value should usually work—only change it for large deployments in case the log files show database connection failures.
12.3.1 Deploying MariaDB #
Deploying the database requires the use of MariaDB.
The MariaDB back end features full HA support based on the Galera clustering technology. The HA setup requires an odd number of nodes. The recommended number of nodes is 3.
12.3.1.1 SSL Configuration #
SSL can be enabled with either a stand-alone or cluster deployment. The replication traffic between database nodes is not encrypted, whilst traffic between the database server(s) and clients is, so a separate network for the database servers is recommended.
Certificates can be provided, or the barclamp can generate self-signed certificates. The certificate file names are configurable in the barclamp, and the directories /etc/mysql/ssl/certs and /etc/mysql/ssl/private need to be created before the barclamp is applied if you use the defaults. The CA certificate and the certificate for MariaDB to use both go into /etc/mysql/ssl/certs. The appropriate private key for the certificate is placed into the /etc/mysql/ssl/private directory. As long as the files are readable when the barclamp is deployed, permissions can be tightened after a successful deployment once the appropriate UNIX groups exist.
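A minimal sketch of preparing the default directories before applying the barclamp; the certificate and key file names below are placeholders and must match the names configured in the barclamp:
root # mkdir -p /etc/mysql/ssl/certs /etc/mysql/ssl/private
root # cp ca.pem mysql-server-cert.pem /etc/mysql/ssl/certs/
root # cp mysql-server-key.pem /etc/mysql/ssl/private/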
The Common Name (CN) for the SSL certificate must be the fully qualified server name for single host deployments, and cluster-CLUSTER_NAME.FULL_DOMAIN_NAME for cluster deployments.
If certificate validation errors are causing issues with deploying other barclamps (for example, when creating databases or users) you can check the configuration with mysql --ssl-verify-server-cert, which will perform the same verification that Crowbar does when connecting to the database server.
If certificates are supplied, the CA certificate and its full trust chain must be in the ca.pem file. The certificate must be trusted by the machine (or all cluster members in a cluster deployment), and it must be available on all client machines. That is, if the OpenStack services are deployed on separate machines or cluster members, they will all require the CA certificate to be in /etc/mysql/ssl/certs as well as trusted by the machine.
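One way to make the CA trusted on SUSE Linux Enterprise hosts, assuming the CA certificate is available as ca.pem, is to add it to the system-wide trust store on every affected machine:
root # cp ca.pem /etc/pki/trust/anchors/
root # update-ca-certificates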
12.3.1.2 MariaDB Configuration Options #
The following configuration settings are available via the barclamp graphical interface:
- Datadir
Path to a directory for storing database data.
- Maximum Number of Simultaneous Connections
The maximum number of simultaneous client connections.
- Number of days after the binary logs can be automatically removed
A period after which the binary logs are removed.
- Slow Query Logging
When enabled, all queries that take longer than usual to execute are logged to a separate log file (by default, it's
/var/log/mysql/mysql_slow.log
). This can be useful for debugging.
When MariaDB is used as the database back end, the database role cannot be deployed to the node that has the monasca-server role. These two roles cannot coexist because monasca uses its own MariaDB instance.
12.4 Deploying RabbitMQ #
The RabbitMQ messaging system enables services to communicate with the other nodes via Advanced Message Queue Protocol (AMQP). Deploying it is mandatory. RabbitMQ needs to be installed on a Control Node. RabbitMQ can be made highly available by deploying it on a cluster. We recommend not changing the default values of the proposal's attributes.
- Virtual Host
Name of the default virtual host to be created and used by the RabbitMQ server (default_vhost configuration option in rabbitmq.config).
- Port
Port the RabbitMQ server listens on (tcp_listeners configuration option in rabbitmq.config).
- User
RabbitMQ default user (default_user configuration option in rabbitmq.config).
12.4.1 HA Setup for RabbitMQ #
To make RabbitMQ highly available, deploy it on a cluster instead of a single Control Node. This also requires shared storage for the cluster that hosts the RabbitMQ data. We recommend using a dedicated cluster to deploy RabbitMQ together with the database, since both components require shared storage.
Deploying RabbitMQ on a cluster makes an additional section available in the section of the proposal. Configure the in this section.
12.4.2 SSL Configuration for RabbitMQ #
The RabbitMQ barclamp supports securing traffic via SSL. This is similar to the SSL support in other barclamps, but with these differences:
RabbitMQ can listen on two ports at the same time, typically port 5672 for unsecured and port 5671 for secured traffic.
The ceilometer pipeline for OpenStack swift cannot be passed SSL-related parameters. When SSL is enabled for RabbitMQ, the ceilometer pipeline in swift is turned off rather than sending data over an unsecured channel.
The following steps are the fastest way to set up and test a new SSL certificate authority (CA).
In the RabbitMQ barclamp set to , and to true, then apply the barclamp. The barclamp will create a new CA, enter the correct settings in /etc/rabbitmq/rabbitmq.config, and start RabbitMQ.
Test your new CA with OpenSSL, substituting the hostname of your control node:
openssl s_client -connect d52-54-00-59-e5-fd:5671
[...]
Verify return code: 18 (self signed certificate)
This outputs a lot of information, including a copy of the server's public certificate, protocols, ciphers, and the chain of trust.
The last step is to configure client services to use SSL to access the RabbitMQ service. (See https://www.rabbitmq.com/ssl.html for a complete reference).
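To verify a properly trusted setup from a client machine, you can repeat the OpenSSL check against your CA bundle; the CA file path below is an example only:
tux > openssl s_client -connect d52-54-00-59-e5-fd:5671 -CAfile /etc/ssl/ca.pem
[...]
Verify return code: 0 (ok)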
It is preferable to set up your own CA. The best practice is to use a commercial certificate authority. You may also deploy your own self-signed certificates, provided that your cloud is not publicly-accessible, and only for your internal use. Follow these steps to enable your own CA in RabbitMQ and deploy it to SUSE OpenStack Cloud:
Configure the RabbitMQ barclamp to use the control node's certificate authority (CA), if it already has one, or create a CA specifically for RabbitMQ and configure the barclamp to use that. (See Section 2.3, “SSL Encryption”, and the RabbitMQ manual has a detailed howto on creating your CA at http://www.rabbitmq.com/ssl.html, with customizations for .NET and Java clients.)
Figure 12.6: SSL Settings for RabbitMQ Barclamp #
The configuration options in the RabbitMQ barclamp allow tailoring the barclamp to your SSL setup.
Set this to to expose all of your configuration options.
RabbitMQ's SSL listening port. The default is 5671.
When this is set to true, self-signed certificates are automatically generated and copied to the correct locations on the control node, and all other barclamp options are set automatically. This is the fastest way to apply and test the barclamp. Do not use this on production systems. When this is set to false, the remaining options are exposed.
The location of your public root CA certificate.
The location of your private server key.
This goes with . Set to to require clients to present SSL certificates to RabbitMQ.
Trust client certificates presented by the clients that are signed by other CAs. You'll need to store copies of the CA certificates; see "Trust the Client's Root CA" at http://www.rabbitmq.com/ssl.html.
When this is set to , clients validate the RabbitMQ server certificate with the file.
Tells clients of RabbitMQ where to find the CA bundle that validates the certificate presented by the RabbitMQ server, when is set to .
12.4.3 Configuring Clients to Send Notifications #
RabbitMQ has an option called Configure clients to send notifications. It defaults to false, which means no events will be sent. It must be set to true for ceilometer, monasca, and any other services consuming notifications. When it is set to true, OpenStack services are configured to submit lifecycle audit events to the notification RabbitMQ queue.
This option should only be enabled if an active consumer is configured, otherwise events will accumulate on the RabbitMQ server, clogging up CPU, memory, and disk storage.
Any accumulation can be cleared by running:
$ rabbitmqctl -p /openstack purge_queue notifications.info
$ rabbitmqctl -p /openstack purge_queue notifications.error
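To check whether notifications are actually accumulating before purging, you can list the queue sizes; a minimal example:
$ rabbitmqctl -p /openstack list_queues name messages consumers
Queues with a growing messages count and zero consumers indicate that no service is consuming the notifications.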
12.5 Deploying keystone #
keystone is another core component that is used by all other OpenStack components. It provides authentication and authorization services. keystone needs to be installed on a Control Node. keystone can be made highly available by deploying it on a cluster. You can configure the following parameters of this barclamp:
Set the algorithm used by keystone to generate the tokens. You can choose between Fernet (the default) or UUID. Note that for performance and security reasons it is strongly recommended to use Fernet.
Allows customizing the region name that Crowbar is going to manage.
Tenant for the users. Do not change the default value of openstack.
User name and password for the administrator.
Specify whether a regular user should be created automatically. Not recommended in most scenarios, especially in an LDAP environment.
User name and password for the regular user. Both the regular user and the administrator accounts can be used to log in to the SUSE OpenStack Cloud Dashboard. However, only the administrator can manage keystone users and access.
Figure 12.7: The keystone Barclamp #
- SSL Support: Protocol
When you use the default value HTTP, public communication will not be encrypted. Choose HTTPS to use SSL for encryption. See Section 2.3, “SSL Encryption” for background information and Section 11.4.6, “Enabling SSL” for installation instructions. The following additional configuration options will become available when choosing HTTPS:
true
, self-signed certificates are automatically generated and copied to the correct locations. This setting is for testing purposes only and should never be used in production environments!- /
Location of the certificate key pair files.
Set this option to true when using self-signed certificates to disable certificate checks. This setting is for testing purposes only and should never be used in production environments!
Specify the absolute path to the CA certificate. This field is mandatory, and leaving it blank will cause the barclamp to fail. To fix this issue, you have to provide the absolute path to the CA certificate, restart the apache2 service, and re-deploy the barclamp.
When the certificate is not already trusted by the pre-installed list of trusted root certificate authorities, you need to provide a certificate bundle that includes the root and all intermediate CAs.
Figure 12.8: The SSL Dialog #
12.5.1 Authenticating with LDAP #
keystone has the ability to separate identity back-ends by domains. SUSE OpenStack Cloud 9 uses this method for authenticating users.
The keystone barclamp sets up a MariaDB database by default. Configuring an LDAP back-end is done in the
view.Set
Then in the
section configure a map with domain names as keys, and configuration as values. In the default proposal the domain name key is , and the keys are the two required sections for an LDAP-based identity driver configuration, the section which sets the driver, and the section which sets the LDAP connection options. You may configure multiple domains, each with its own configuration.
You may make this available to horizon by setting
to in the horizon barclamp.
Users in the LDAP-backed domain have to know the name of the domain in order to authenticate, and must use the keystone v3 API endpoint. (See the OpenStack manuals, Domain-specific Configuration and Integrate Identity with LDAP, for additional details.)
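The exact layout of the domain configuration map depends on the barclamp, but the values correspond to a standard keystone domain-specific LDAP configuration. A minimal sketch of the kind of options involved (all values are placeholders):
[identity]
driver = ldap

[ldap]
url = ldap://ldap.example.com
user = cn=keystone,ou=services,dc=example,dc=com
password = SECRET
suffix = dc=example,dc=com
user_tree_dn = ou=users,dc=example,dc=com
user_objectclass = inetOrgPerson
user_id_attribute = uid
user_name_attribute = uid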
12.5.2 HA Setup for keystone #
Making keystone highly available requires no special configuration—it is sufficient to deploy it on a cluster.
12.5.3 OpenID Connect Setup for keystone #
keystone supports WebSSO by federating with an external identity provider (IdP) using the auth_openidc module.
There are two steps to enable this feature:
Configure the "federation" and "openidc" attributes for the Keystone Barclamp in Crowbar.
Create the Identity Provider, Protocol, and Mapping resources in keystone using the OpenStack CLI.
12.5.3.1 keystone Barclamp Configuration #
Configuring OpenID Connect is done in the view, under the section. The global attributes, namely and , are not specific to OpenID Connect. Rather, they are designed to help facilitate WebSSO browser redirects with external IdPs in a complex cloud deployment environment.
view, under the section. The global attributes, namely and are not specific to OpenID Connect. Rather, they are designed to help facilitate WebSSO browser redirects with external IdPs in a complex cloud deployment environment.If the cloud deployment does not have any external proxies or load balancers, where the public keystone and horizon (Dashboard service) endpoints are directly managed by Crowbar,
and does not need to be provided. However, in a complex cloud deployment where the public Keystone and Horizon endpoints are handled by external load balancers or proxies, and they are not directly managed by Crowbar, and must be provided and they must correctly reflect the external public endpoints.
To configure OpenID Connect, edit the attributes in the subsection.
Set
Provide the name for the . This must be the same as the identity provider to be created in Keystone using the OpenStack CLI. For example, if the identity provider is foo, create the identity provider with the name foo via the OpenStack CLI (that is, openstack identity provider create foo).
corresponds toid_token
.- corresponds to
- corresponds to
- corresponds to
- corresponds to
auth_openidc OIDCRedirectURI. In a cloud deployment where all the external endpoints are directly managed by Crowbar, this attribute can be left blank as it will be auto-populated by Crowbar. However, in a complex cloud deployment where the public Keystone endpoint is handled by an external load balancer or proxy, this attribute must reflect the external Keystone auth endpoint for the OpenID Connect IdP. For example,
corresponds to"https://keystone-public-endpoint.foo.com/v3/OS-FEDERATION/identity_providers/foo/protocols/openid/auth"
WarningSome OpenID Connect IdPs such as Google require the hostname in the
to be a public FQDN. In that case, the hostname in Keystone public endpoint must also be a public FQDN and must match the one specified in the .
12.5.3.2 Create Identity Provider, Protocol, and Mapping #
To fully enable OpenID Connect, the Identity Provider, Protocol, and Mapping for the given IdP must be created in Keystone. This is done by using the OpenStack CLI on a controller node, using the Keystone admin credential.
Log in to a controller node as the root user.
Use the Keystone admin credential.
source ~/.openrc
Create the Identity Provider. For example:
openstack identity provider create foo
Warning
The name of the Identity Provider must be exactly the same as the attribute given when configuring Keystone in the previous section.
Next, create the Mapping for the Identity Provider. Prior to creating the Mapping, one must fully grasp the intricacies of Mapping Combinations, as they may have profound security implications if done incorrectly. Here is an example of a mapping file.
[
  {
    "local": [
      {
        "user": {
          "name": "{0}",
          "email": "{1}",
          "type": "ephemeral"
        },
        "group": {
          "domain": {
            "name": "Default"
          },
          "name": "openidc_demo"
        }
      }
    ],
    "remote": [
      {
        "type": "REMOTE_USER"
      },
      {
        "type": "HTTP_OIDC_EMAIL"
      }
    ]
  }
]
Once the mapping file is created, create the mapping resource in Keystone. For example:
openstack mapping create --rules oidc_mapping.json oidc_mapping
Lastly, create the Protocol for the Identity Provider and its mapping. For OpenID Connect, the protocol name must be openid. For example:
openstack federation protocol create --identity-provider foo --mapping oidc_mapping openid
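To confirm that the federation resources were created as expected, you can list them with the OpenStack CLI; a quick sketch using the example identity provider foo:
openstack identity provider list
openstack mapping list
openstack federation protocol list --identity-provider foo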
12.6 Deploying monasca (Optional) #
monasca is an open-source monitoring-as-a-service solution that integrates with OpenStack. monasca is designed for scalability, high performance, and fault tolerance.
Accessing the interface is not required for day-to-day operation. But as not all monasca settings are exposed in the barclamp graphical interface (for example, various performance tunables), it is recommended to configure monasca in the mode. Below are the options that can be configured via the interface of the monasca barclamp.
- keystone
Contains keystone credentials that the agents use to send metrics. Do not change these options, as they are configured by Crowbar.
- insecure
Specifies whether SSL certificates are verified when communicating with keystone. If set to false, the ca_file option must be specified.
- ca_file
Specifies the location of a CA certificate that is used for verifying keystone's SSL certificate.
- log_dir
Path for storing log files. The specified path must exist. Do not change the default /var/log/monasca-agent path.
- log_level
Agent's log level. Limits log messages to the specified level and above. The following levels are available: Error, Warning, Info (default), and Debug.
- check_frequency
Interval in seconds between running agents' checks.
- num_collector_threads
Number of simultaneous collector threads to run. This refers to the maximum number of different collector plug-ins (for example, http_check) that are allowed to run simultaneously. The default value 1 means that plug-ins are run sequentially.
- pool_full_max_retries
If a problem with the results from multiple plug-ins blocks the entire thread pool (as specified by the num_collector_threads parameter), the collector exits so it can be restarted by supervisord. The parameter pool_full_max_retries specifies when this event occurs. The collector exits when the defined number of consecutive collection cycles have ended with the thread pool completely full.
- plugin_collect_time_warn
Upper limit in seconds for any collection plug-in's run time. A warning is logged if a plug-in runs longer than the specified limit.
- max_measurement_buffer_size
Maximum number of measurements to buffer locally if the monasca API is unreachable. Measurements will be dropped in batches, if the API is still unreachable after the specified number of messages are buffered. The default -1 value indicates unlimited buffering. Note that a large buffer increases the agent's memory usage.
- backlog_send_rate
Maximum number of measurements to send when the local measurement buffer is flushed.
- amplifier
Number of extra dimensions to add to metrics sent to the monasca API. This option is intended for load testing purposes only. Do not enable the option in production! The default 0 value disables the addition of dimensions.
- max_data_size_kb
Maximum payload size in kilobytes for a request sent to the monasca log API.
- num_of_logs
Maximum number of log entries the log agent sends to the monasca log API in a single request. Reducing the number increases performance.
- elapsed_time_sec
Time interval in seconds between sending logs to the monasca log API.
- delay
Interval in seconds for checking whether elapsed_time_sec has been reached.
- keystone
keystone credentials the log agents use to send logs to the monasca log API. Do not change this option manually, as it is configured by Crowbar.
- bind_host
Interfaces monasca-api listens on. Do not change this option, as it is configured by Crowbar.
- processes
Number of processes to spawn.
- threads
Number of WSGI worker threads to spawn.
- log_level
Log level for openstack-monasca-api. Limits log messages to the specified level and above. The following levels are available: Critical, Error, Warning, Info (default), Debug, and Trace.
- repo_dir
List of directories for storing Elasticsearch snapshots. Must be created manually and be writeable by the elasticsearch user. Must contain at least one entry in order for the snapshot functionality to work.
- heap_size
Sets the heap size. We recommend setting heap size at 50% of the available memory, but not more than 31 GB. The default of 4 GB is likely too small and should be increased if possible.
- limit_memlock
The maximum size that may be locked into memory in bytes
- limit_nofile
The maximum number of open file descriptors
- limit_nproc
The maximum number of processes
- vm_max_map_count
The maximum number of memory map areas a process may have.
For instructions on creating an Elasticsearch snapshot, see Section 4.7.4, “Backup and Recovery”.
elasticsearch-curator removes old and large elasticsearch indices. The settings below determine its behavior.
- delete_after_days
Time threshold for deleting indices. Indices older than the specified number of days are deleted. This parameter is unset by default, so indices are kept indefinitely.
- delete_after_size
Maximum size in megabytes of indices. Indices larger than the specified size are deleted. This parameter is unset by default, so indices are kept irrespective of their size.
- delete_exclude_index
List of indices to exclude from elasticsearch-curator runs. By default, only the .kibana files are excluded.
- log_retention_hours
Number of hours for retaining log segments in Kafka's on-disk log. Messages older than the specified value are dropped.
- log_retention_bytes
Maximum size for Kafka's on-disk log in bytes. If the log grows beyond this size, the oldest log segments are dropped.
- topics
list of topics
metrics
events
alarm-state-transitions
alarm-notifications
retry-notifications
60-seconds-notifications
log
transformed-log
The following are options of every topic:
- replicas
Controls how many servers replicate each message that is written
- partitions
Controls how many logs the topic is sharded into
- config_options
Map of configuration options, as described in the Apache Kafka documentation.
These parameters only affect first time installations. Parameters may be changed after installation with scripts available from Apache Kafka.
Kafka does not support reducing the number of partitions for a topic.
- email_enabled
Enable or disable email alarm notifications.
- email_smtp_host
SMTP smarthost for sending alarm notifications.
- email_smtp_port
Port for the SMTP smarthost.
- email_smtp_user
User name for authenticating against the smarthost.
- email_smtp_password
Password for authenticating against the smarthost.
- email_smtp_from_address
Sender address for alarm notifications.
- influxdb_retention_policy
Number of days to keep metrics records in influxdb.
For an overview of all supported values, see https://docs.influxdata.com/influxdb/v1.1/query_language/database_management/#create-retention-policies-with-create-retention-policy.
- monitor_libvirt
The global switch for toggling libvirt monitoring. If set to true, libvirt metrics will be gathered on all libvirt based Compute Nodes. This setting is available in the Crowbar UI.
- monitor_ceph
The global switch for toggling Ceph monitoring. If set to true, Ceph metrics will be gathered on all Ceph-based Compute Nodes. This setting is available in Crowbar UI. If the Ceph cluster has been set up independently, Crowbar ignores this setting.
- cache_dir
The directory where monasca-agent will locally cache various metadata about locally running VMs on each Compute Node.
- customer_metadata
Specifies the list of instance metadata keys to be included as dimensions with customer metrics. This is useful for providing more information about an instance.
- disk_collection_period
Specifies a minimum interval in seconds for collecting disk metrics. Increase this value to reduce I/O load. If the value is lower than the global agent collection period (check_frequency), it will be ignored in favor of the global collection period.
- max_ping_concurrency
Specifies the number of ping command processes to run concurrently when determining whether the VM is reachable. This should be set to a value that allows the plugin to finish within the agent's collection period, even if there is a networking issue. For example, if the expected number of VMs per Compute Node is 40 and each VM has one IP address, then the plugin will take at least 40 seconds to do the ping checks in the worst-case scenario where all pings fail (assuming the default timeout of 1 second). Increasing max_ping_concurrency allows the plugin to finish faster.
- metadata
Specifies the list of nova side instance metadata keys to be included as dimensions with the cross-tenant metrics for the project. This is useful for providing more information about an instance.
- nova_refresh
Specifies the number of seconds between calls to the nova API to refresh the instance cache. This is helpful for updating VM hostname and pruning deleted instances from the cache. By default, it is set to 14,400 seconds (four hours). Set to 0 to refresh every time the Collector runs, or to None to disable regular refreshes entirely. In this case, the instance cache will only be refreshed when a new instance is detected.
- ping_check
Includes the entire ping command (without the IP address, which is automatically appended) to perform a ping check against instances. The NAMESPACE keyword is automatically replaced with the appropriate network namespace for the VM being monitored. Set to False to disable ping checks.
- vnic_collection_period
Specifies a minimum interval in seconds for collecting VM network metrics. Increase this value to reduce I/O load. If the value is lower than the global agent collection period (check_frequency), it will be ignored in favor of the global collection period.
- vm_cpu_check_enable
Toggles the collection of VM CPU metrics. Set to true to enable.
- vm_disks_check_enable
Toggles the collection of VM disk metrics. Set to true to enable.
- vm_extended_disks_check_enable
Toggles the collection of extended disk metrics. Set to true to enable.
- vm_network_check_enable
Toggles the collection of VM network metrics. Set to true to enable.
- vm_ping_check_enable
Toggles ping checks for checking whether a host is alive. Set to true to enable.
- vm_probation
Specifies a period of time (in seconds) in which to suspend metrics from a newly-created VM. This is to prevent quickly-obsolete metrics in an environment with a high amount of instance churn (VMs created and destroyed in rapid succession). The default probation length is 300 seconds (5 minutes). Set to 0 to disable VM probation. In this case, metrics are recorded immediately after a VM is created.
- vnic_collection_period
Specifies a minimum interval in seconds for collecting VM network metrics. Increase this value to reduce I/O load. If the value is lower than the global agent collection period (check_frequency), it will be ignored in favor of the global collection period.
The monasca component consists of the following roles:
- monasca-server
monasca server-side components that are deployed by Chef. Currently, this only creates keystone resources required by monasca, such as users, roles, endpoints, etc. The rest is left to the Ansible-based monasca-installer run by the monasca-master role.
- monasca-master
Runs the Ansible-based monasca-installer from the Crowbar node. The installer deploys the monasca server-side components to the node that has the monasca-server role assigned to it. These components are openstack-monasca-api and openstack-monasca-log-api, as well as all the back-end services they use.
- monasca-agent
Deploys openstack-monasca-agent, which is responsible for sending metrics to monasca-api, on nodes it is assigned to.
- monasca-log-agent
Deploys openstack-monasca-log-agent, which is responsible for sending logs to monasca-log-api, on nodes it is assigned to.
12.7 Deploying swift (optional) #
swift adds an object storage service to SUSE OpenStack Cloud for storing single files such as images or snapshots. It offers high data security by storing the data redundantly on a pool of Storage Nodes—therefore swift needs to be installed on at least two dedicated nodes.
To properly configure swift it is important to understand how it places the data. Data is always stored redundantly within the hierarchy. The swift hierarchy in SUSE OpenStack Cloud is formed out of zones, nodes, hard disks, and logical partitions. Zones are physically separated clusters, for example different server rooms each with its own power supply and network segment. A failure of one zone must not affect another zone. The next level in the hierarchy are the individual swift storage nodes (on which the swift-storage role has been deployed), followed by the hard disks. Logical partitions come last.
swift automatically places three copies of each object on the highest hierarchy level possible. If three zones are available, then each copy of the object will be placed in a different zone. In a one zone setup with more than two nodes, the object copies will each be stored on a different node. In a one zone setup with two nodes, the copies will be distributed on different hard disks. If no other hierarchy element fits, logical partitions are used.
The following attributes can be set to configure swift:
Set to true to enable public access to containers.
If set to true, a copy of the current version is archived each time an object is updated.
Number of zones (see above). If you do not have different independent installations of storage nodes, set the number of zones to 1.
Partition power. The number entered here is used to compute the number of logical partitions to be created in the cluster. The number you enter is used as a power of 2 (2^X).
We recommend using a minimum of 100 partitions per disk. To measure the partition power for your setup, multiply the number of disks from all swift nodes by 100, and then round up to the nearest power of two. Keep in mind that the first disk of each node is not used by swift, but rather for the operating system.
Example: 10 swift nodes with 5 hard disks each. Four hard disks on each node are used for swift, so there is a total of forty disks. 40 x 100 = 4000. The nearest power of two, 4096, equals 2^12. So the partition power that needs to be entered is 12.
Important: Value Cannot be Changed After the Proposal Has Been Deployed
Changing the number of logical partitions after swift has been deployed is not supported. Therefore the value for the partition power should be calculated from the maximum number of partitions this cloud installation is likely going to need at any point in time.
This option sets the number of hours before a logical partition is considered for relocation. 24 is the recommended value.
The number of copies generated for each object. The number of replicas depends on the number of disks and zones.
Time (in seconds) after which to start a new replication process.
Shows debugging output in the log files when set to true.
Choose whether to encrypt public communication ( ) or not ( ). If you choose , you have two options. You can either or provide the locations for the certificate key pair files. Using self-signed certificates is for testing purposes only and should never be used in production environments!
Apart from the general configuration described above, the swift barclamp lets you also activate and configure
. The features these middlewares provide can be used via the swift command line client only. The Ratelimit and S3 middlewares provide the most interesting features, and we recommend enabling other middleware only for specific use cases.
Provides an S3 compatible API on top of swift.
Serve container data as a static Web site with an index file and optional file listings. See http://docs.openstack.org/developer/swift/middleware.html#staticweb for details.
This middleware requires setting to true.
Create URLs to provide time-limited access to objects. See http://docs.openstack.org/developer/swift/middleware.html#tempurl for details.
Upload files to a container via Web form. See http://docs.openstack.org/developer/swift/middleware.html#formpost for details.
Extract TAR archives into a swift account, and delete multiple objects or containers with a single request. See http://docs.openstack.org/developer/swift/middleware.html#module-swift.common.middleware.bulk for details.
Interact with the swift API via Flash, Java, and Silverlight from an external network. See http://docs.openstack.org/developer/swift/middleware.html#module-swift.common.middleware.crossdomain for details.
Translates container and account parts of a domain to path parameters that the swift proxy server understands. Can be used to create short URLs that are easy to remember, for example by rewriting
home.tux.example.com/$ROOT/tux/home/myfile
tohome.tux.example.com/myfile
. See http://docs.openstack.org/developer/swift/middleware.html#module-swift.common.middleware.domain_remap for details.Throttle resources such as requests per minute to provide denial of service protection. See http://docs.openstack.org/developer/swift/middleware.html#module-swift.common.middleware.ratelimit for details.
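For example, assuming the TempURL middleware has been enabled, a time-limited URL for an object could be generated with the swift client roughly as follows (account, container, object, and key are placeholder values):
# Set an account-level temp URL key, then generate a URL valid for one hour:
swift post -m "Temp-URL-Key:MY_SECRET_KEY"
swift tempurl GET 3600 /v1/AUTH_tenant/container/object MY_SECRET_KEY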
The swift component consists of four different roles. Deploying
is optional:The virtual object storage service. Install this role on all dedicated swift Storage Nodes (at least two), but not on any other node.
Warning: swift-storage Needs Dedicated MachinesNever install the swift-storage service on a node that runs other OpenStack components.
The ring maintains the information about the location of objects, replicas, and devices. It can be compared to an index that is used by various OpenStack components to look up the physical location of objects. must only be installed on a single node, preferably a Control Node.
The swift proxy server takes care of routing requests to swift. Installing a single instance of
on a Control Node is recommended. The role can be made highly available by deploying it on a cluster.Deploying
is optional. The swift dispersion tools can be used to test the health of the cluster. They create a set of dummy objects (using 1% of the total space available). The state of these objects can be queried with the swift-dispersion-report command. This role needs to be installed on a Control Node.
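For example, on the Control Node where this role is deployed, the tools are typically used as follows (assuming the dispersion configuration created by the barclamp is in place):
# Populate the cluster with dummy objects (run once), then query their health:
swift-dispersion-populate
swift-dispersion-report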
12.7.1 HA Setup for swift #
swift replicates by design, so there is no need for a special HA setup. Make sure to fulfill the requirements listed in Section 2.6.4.1, “swift—Avoiding Points of Failure”.
12.8 Deploying glance #
glance provides discovery, registration, and delivery services for virtual disk images. An image is needed to start an instance—it serves as its pre-installed root partition. All images you want to use in your cloud to boot instances from are provided by glance. glance must be deployed onto a Control Node. glance can be made highly available by deploying it on a cluster.
There are a lot of options to configure glance. The most important ones are explained below—for a complete reference refer to https://github.com/crowbar/crowbar-openstack/blob/master/glance.yml.
As of SUSE OpenStack Cloud Crowbar 7, the glance API v1 is no longer enabled by default. Instead, glance API v2 is used by default.
If you need to re-enable API v1 for compatibility reasons:
Switch to the
view of the glance barclamp.Search for the
enable_v1
entry and set it totrue
:"enable_v1": true
In new installations, this entry is set to
false
by default. When upgrading from an older version of SUSE OpenStack Cloud Crowbar it is set totrue
by default.Apply your changes.
Images are stored in an image file on the Control Node.
Provides volume block storage to SUSE OpenStack Cloud Crowbar. Use it to store images.
Provides an object storage service to SUSE OpenStack Cloud Crowbar.
SUSE Enterprise Storage (based on Ceph) provides block storage service to SUSE OpenStack Cloud Crowbar.
If you are using VMware as a hypervisor, it is recommended to use the VMware back-end for storing images. This will make starting VMware instances much faster.
If this option is enabled, the API will communicate the direct URL of the image's back-end location to HTTP clients. It is disabled by default.
Depending on the storage back-end, there are additional configuration options available:
Only required if
is set to .Specify the directory to host the image file. The directory specified here can also be an NFS share. See Section 11.4.3, “Mounting NFS Shares on a Node” for more information.
Only required if
is set to .Set the name of the container to use for the images in swift.
Only required if
is set to .- RADOS User for CephX Authentication
If you are using an external Ceph cluster, specify the user you have set up for glance (see Section 11.4.4, “Using an Externally Managed Ceph Cluster” for more information).
- RADOS Pool for glance images
If you are using a SUSE OpenStack Cloud internal Ceph setup, the pool you specify here is created if it does not exist. If you are using an external Ceph cluster, specify the pool you have set up for glance (see Section 11.4.4, “Using an Externally Managed Ceph Cluster” for more information).
Only required if
is set to .Name or IP address of the vCenter server.
- /
vCenter login credentials.
A comma-separated list of datastores specified in the format: DATACENTER_NAME:DATASTORE_NAME
Specify an absolute path here.
Choose whether to encrypt public communication ( ) or not ( ). If you choose , refer to SSL Support: Protocol for configuration details.
Enable and configure image caching in this section. By default, image caching is disabled. You can see this in the Raw view of your nova barclamp:
image_cache_manager_interval = -1
This option sets the number of seconds to wait between runs of the image cache manager. Disabling it means that the cache manager will not automatically remove the unused images from the cache, so if you have many glance images and are running out of storage you must manually remove the unused images from the cache. We recommend leaving this option disabled as it is known to cause issues, especially with shared storage. The cache manager may remove images still in use, e.g. when network outages cause synchronization problems with compute nodes.
If you wish to enable caching, re-enable it in a custom nova configuration file, for example
/etc/nova/nova.conf.d/500-nova.conf
. This example sets the interval to 40 minutes (2400 seconds):
image_cache_manager_interval = 2400
See Chapter 14, Configuration Files for OpenStack Services for more information on custom configurations.
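A minimal sketch of such a custom configuration file, assuming the option is still read from the [DEFAULT] section in this release:
# /etc/nova/nova.conf.d/500-nova.conf on the Compute Nodes
[DEFAULT]
# Run the image cache manager every 40 minutes (the value is in seconds)
image_cache_manager_interval = 2400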
Learn more about glance's caching feature at http://docs.openstack.org/developer/glance/cache.html.
Shows debugging output in the log files when set to
.
12.8.1 HA Setup for glance #
glance can be made highly available by deploying it on a cluster. We strongly recommended doing this for the image data as well. The recommended way is to use swift or an external Ceph cluster for the image repository. If you are using a directory on the node instead (file storage back-end), you should set up shared storage on the cluster for it.
12.9 Deploying cinder #
cinder, the successor of nova-volume
, provides
volume block storage.
It adds persistent storage to an instance that persists until deleted,
contrary to ephemeral volumes that only persist while the instance is
running.
cinder can provide volume storage by using different back-ends such as local file, one or more local disks, Ceph (RADOS), VMware, or network storage solutions from EMC, EqualLogic, Fujitsu, NetApp or Pure Storage. Since SUSE OpenStack Cloud Crowbar 5, cinder supports using several back-ends simultaneously. It is also possible to deploy the same network storage back-end multiple times and therefore use different installations at the same time.
The attributes that can be set to configure cinder depend on the back-end. The only general option is SSL Support: Protocol for configuration details).
(seeWhen first opening the cinder barclamp, the default proposal—
—is already available for configuration. To optionally add a back-end, go to the section and choose a from the drop-down box. Optionally, specify the . This is recommended when deploying the same volume type more than once. Existing back-end configurations (including the default one) can be deleted by clicking the trashcan icon if no longer needed. Note that you must configure at least one back-end.(local disks) #
Choose whether to use the “Available disks” are all disks currently not used by the system. Note that one disk (usually
disk or disks./dev/sda
) of every block storage node is already used for the operating system and is not available for cinder.Specify a name for the cinder volume.
(EMC² Storage) #
- /
IP address and Port of the ECOM server.
- /
Login credentials for the ECOM server.
VMAX port groups that expose volumes managed by this back-end.
Unique VMAX array serial number.
Unique pool name within a given array.
Name of the FAST Policy to be used. When specified, volumes managed by this back-end are managed under FAST control.
For more information on the EMC driver refer to the OpenStack documentation at http://docs.openstack.org/liberty/config-reference/content/emc-vmax-driver.html.
EqualLogic drivers are included as a technology preview and are not supported.
Select the protocol used to connect, either
or .- /
IP address and port of the ETERNUS SMI-S Server.
- /
Login credentials for the ETERNUS SMI-S Server.
Storage pool (RAID group) in which the volumes are created. Make sure that the RAID group on the server has already been created. If a RAID group that does not exist is specified, the RAID group is built from unused disk drives. The RAID level is automatically determined by the ETERNUS DX Disk storage system.
For information on configuring the Hitachi HUSVM back-end, refer to http://docs.openstack.org/ocata/config-reference/block-storage/drivers/hitachi-storage-volume-driver.html.
- /
SUSE OpenStack Cloud can use “Data ONTAP” in , or in . In vFiler will be configured, in vServer will be configured. The can be set to either or . Choose the driver and the protocol your NetApp is licensed for.
The management IP address for the 7-Mode storage controller, or the cluster management IP address for the clustered Data ONTAP.
Transport protocol for communicating with the storage controller or clustered Data ONTAP. Supported protocols are HTTP and HTTPS. Choose the protocol your NetApp is licensed for.
The port to use for communication. Port 80 is usually used for HTTP, 443 for HTTPS.
- /
Login credentials.
The vFiler unit to be used for provisioning of OpenStack volumes. This setting is only available in .
Provide a list of comma-separated volume names to be used for provisioning. This setting is only available when using iSCSI as storage protocol.
A list of available file systems on an NFS server. Enter your NFS mountpoints in the
form in this format: host:mountpoint -o options. For example:host1:/srv/nfs/share1 /mnt/nfs/share1 -o rsize=8192,wsize=8192,timeo=14,intr
IP address of the FlashArray management VIP
API token for access to the FlashArray
Enable or disable iSCSI CHAP authentication
For more information on the Pure Storage FlashArray driver refer to the OpenStack documentation at https://docs.openstack.org/ocata/config-reference/block-storage/drivers/pure-storage-driver.html.
(Ceph) #
Select Section 11.4.4, “Using an Externally Managed Ceph Cluster” for setup instructions).
, if you are using an external Ceph cluster (seeName of the pool used to store the cinder volumes.
Ceph user name.
Host name or IP address of the vCenter server.
- /
vCenter login credentials.
Provide a comma-separated list of cluster names.
Path to the directory used to store the cinder volumes.
Absolute path to the vCenter CA certificate.
Default value:
false
(the CA truststore is used for verification). Set this option totrue
when using self-signed certificates to disable certificate checks. This setting is for testing purposes only and must not be used in production environments!
Absolute path to the file to be used for block storage.
Maximum size of the volume file. Make sure not to overcommit the size, since it will result in data loss.
Specify a name for the cinder volume.
Using a file for block storage is not recommended for production systems, because of performance and data security reasons.
Lets you manually pick and configure a driver. Only use this option for testing purposes, as it is not supported.
The cinder component consists of two different roles:
The cinder controller provides the scheduler and the API. Installing
on a Control Node is recommended.The virtual block storage service. It can be installed on a Control Node. However, we recommend deploying it on one or more dedicated nodes supplied with sufficient networking capacity to handle the increase in network traffic.
12.9.1 HA Setup for cinder #
Both the
and the role can be deployed on a cluster.If you need to re-deploy
role from a single machine to a cluster environment, the following will happen: Volumes that are currently attached to instances will continue to work, but adding volumes to instances will not succeed.
To solve this issue, run the following script once on each node that
belongs to the /usr/bin/cinder-migrate-volume-names-to-cluster
.
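For example, assuming password-less root SSH from the Admin Node (as set up by Crowbar), the script could be run on each affected node like this (NODE_NAME is a placeholder):
ssh root@NODE_NAME /usr/bin/cinder-migrate-volume-names-to-cluster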
The script is automatically installed by Crowbar on every machine or cluster that has a
role applied to it.In combination with Ceph or a network storage solution, deploying cinder in a cluster minimizes the potential downtime. For
to be applicable to a cluster, the role needs all cinder backends to be configured for non-local storage. If you are using local volumes or raw devices in any of your volume backends, you cannot apply to a cluster.12.10 Deploying neutron #
neutron provides network connectivity between interface devices managed by other OpenStack components (most likely nova). The service works by enabling users to create their own networks and then attach interfaces to them.
neutron must be deployed on a Control Node. You first need to choose a core plug-in—
or . Depending on your choice, more configuration options will become available.The
option lets you use an existing VMware NSX installation. Using this plugin is not a prerequisite for the VMware vSphere hypervisor support. However, it is needed when wanting to have security groups supported on VMware compute nodes. For all other scenarios, choose .The only global option that can be configured is SSL Support: Protocol for configuration details.
. Choose whether to encrypt public communication ( ) or not ( ). If choosing , refer to(Modular Layer 2) #
Select which mechanism driver(s) shall be enabled for the ml2 plugin. It is possible to select more than one driver by holding the Ctrl key while clicking. Choices are:
Supports GRE, VLAN and VXLAN networks (to be configured via the . setting). VXLAN is the default.
Supports VLANs only. Requires to specify the . .
Enables neutron to dynamically adjust the VLAN settings of the ports of an existing Cisco Nexus switch when instances are launched. It also requires . which will automatically be selected. With , must be added. This option also requires to specify the . See Appendix A, Using Cisco Nexus Switches with neutron for details.
vmware_dvs driver makes it possible to use neutron for networking in a VMware-based environment. Choosing . , automatically selects the required , , and drivers. In the view, it is also possible to configure two additional attributes: (clean up the DVS portgroups on the target vCenter Servers when neutron-server is restarted) and (create DVS portgroups corresponding to networks in advance, rather than when virtual machines are attached to these networks).
With the default setup, all intra-Compute Node traffic flows through the network Control Node. The same is true for all traffic from floating IPs. In large deployments the network Control Node can therefore quickly become a bottleneck. When this option is enabled, network agents will be installed on all compute nodes. This will de-centralize the network traffic, since Compute Nodes will be able to directly “talk” to each other. Distributed Virtual Routers (DVR) require the driver and will not work with the driver. For details on DVR refer to https://wiki.openstack.org/wiki/Neutron/DVR.
This option is only available when having chosen the or the mechanism drivers. Options are , and . It is possible to select more than one driver by holding the Ctrl key while clicking.
When multiple type drivers are enabled, you need to select the one that will be used for newly created provider networks. This also includes the nova_fixed network, which will be created when applying the neutron proposal. When manually creating provider networks with the neutron command, the default can be overwritten with the --provider:network_type type switch. You will also need to set a . It is not possible to change this default when manually creating tenant networks with the neutron command. The non-default type driver will only be used as a fallback.
Depending on your choice of the type driver, more configuration options become available.
Having chosen . , you also need to specify the start and end of the tunnel ID range.
The option . requires you to specify the .
Having chosen . , you also need to specify the start and end of the VNI range.
neutron must not be deployed with the openvswitch with gre plug-in.
- xCAT Host/IP Address
Host name or IP address of the xCAT Management Node.
- xCAT Username/Password
xCAT login credentials.
- rdev list for physnet1 vswitch uplink (if available)
List of rdev addresses that should be connected to this vswitch.
- xCAT IP Address on Management Network
IP address of the xCAT management interface.
- Net Mask of Management Network
Net mask of the xCAT management interface.
This plug-in requires to configure access to the VMware NSX service.
Login credentials for the VMware NSX server. The user needs to have administrator permissions on the NSX server.
Enter the IP address and the port number (IP-ADDRESS:PORT) of the controller API endpoint. If the port number is omitted, port 443 will be used. You may also enter multiple API endpoints (comma-separated), provided they all belong to the same controller cluster. When multiple API endpoints are specified, the plugin will load balance requests on the various API endpoints.
The UUIDs for the transport zone and the gateway service can be obtained from the NSX server. They will be used when networks are created.
The neutron component consists of two different roles:
This service runs the various agents that manage the network traffic of all the cloud instances. It acts as the DHCP and DNS server and as a gateway for all cloud instances. It is recommended to deploy this role on a dedicated node supplied with sufficient network capacity.
12.10.1 Using Infoblox IPAM Plug-in #
In the neutron barclamp, you can enable support for the infoblox IPAM
plug-in and configure it. For configuration, the
infoblox
section contains the subsections
grids
and grid_defaults
.
- grids
This subsection must contain at least one entry. For each entry, the following parameters are required:
admin_user_name
admin_password
grid_master_host
grid_master_name
data_center_name
You can also add multiple entries to the
grids
section. However, the upstream infoblox agent only supports a single grid currently.- grid_defaults
This subsection contains the default settings that are used for each grid (unless you have configured specific settings within the
grids
section).
For detailed information on all infoblox-related configuration settings, see https://github.com/openstack/networking-infoblox/blob/master/doc/source/installation.rst.
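As an illustration only, a single entry in the grids list might look similar to the following sketch in the raw view (all values are placeholders; the exact surrounding JSON structure is defined by the barclamp):
"grids": [
  {
    "admin_user_name": "admin",
    "admin_password": "INFOBLOX_PASSWORD",
    "grid_master_host": "gridmaster.example.com",
    "grid_master_name": "grid1",
    "data_center_name": "dc1"
  }
],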
Currently, all configuration options for infoblox are only available in the
raw
mode of the neutron barclamp. To enable support for
the infoblox IPAM plug-in and configure it, proceed as follows:
Click
and search for the following section:"use_infoblox": false,
To enable support for the infoblox IPAM plug-in, change this entry to:
"use_infoblox": true,
In the
grids
section, configure at least one grid by replacing the example values for each parameter with real values.If you need specific settings for a grid, add some of the parameters from the
grid_defaults
section to the respective grid entry and adjust their values.Otherwise Crowbar applies the default setting to each grid when you save the barclamp proposal.
Save your changes and apply them.
12.10.2 HA Setup for neutron #
neutron can be made highly available by deploying
and on a cluster. While may be deployed on a cluster shared with other services, it is strongly recommended to use a dedicated cluster solely for the role.12.10.3 Setting Up Multiple External Networks #
This section shows you how to create external networks on SUSE OpenStack Cloud.
12.10.3.1 New Network Configurations #
If you have not yet deployed Crowbar, add the following configuration to
/etc/crowbar/network.json
to set up an external network, using the name of your new network, VLAN ID, and network addresses. If you have already deployed Crowbar, then add this configuration to the view of the Network Barclamp.
"public2": {
  "conduit": "intf1",
  "vlan": 600,
  "use_vlan": true,
  "add_bridge": false,
  "subnet": "192.168.135.128",
  "netmask": "255.255.255.128",
  "broadcast": "192.168.135.255",
  "ranges": {
    "host": { "start": "192.168.135.129", "end": "192.168.135.254" }
  }
},
Modify the additional_external_networks in the
view of the neutron Barclamp with the name of your new external network.Apply both barclamps, and it may also be necessary to re-apply the nova Barclamp.
Then follow the steps in the next section to create the new external network.
12.10.3.2 Create the New External Network #
The following steps add the network settings, including IP address pools, gateway, routing, and virtual switches to your new network.
Set up interface mapping using either Open vSwitch (OVS) or Linuxbridge. For Open vSwitch run the following command:
openstack network create --provider-network-type flat \
  --provider-physical-network public2 --external public2
For Linuxbridge run the following command:
openstack network create --external --provider-physical-network physnet1 \
  --provider-network-type vlan --provider-segment 600 public2
If a different network is used then Crowbar will create a new interface mapping. Then you can use a flat network:
openstack network create --provider-network-type flat \
  --provider-physical-network public2 --external public2
Create a subnet:
openstack subnet create --network public2 \
  --allocation-pool start=192.168.135.2,end=192.168.135.127 \
  --gateway 192.168.135.1 --subnet-range 192.168.135.0/24 --no-dhcp public2
Create a router, router2:
openstack router create router2
Connect router2 to the new external network:
openstack router set --external-gateway public2 router2
Create a new private network and connect it to router2
openstack network create priv-net
openstack subnet create --network priv-net --gateway 10.10.10.1 \
  --subnet-range 10.10.10.0/24 priv-net-sub
openstack router add subnet router2 priv-net-sub
Boot a VM on priv-net-sub and set a security group that allows SSH.
Assign a floating IP address to the VM, this time from network public2.
From the node verify that SSH is working by opening an SSH session to the VM.
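The last three steps could look roughly like the following (image, flavor, key pair, and IP address are placeholder values that depend on your cloud):
openstack security group rule create --proto tcp --dst-port 22 default
openstack server create --image cirros --flavor m1.tiny --network priv-net \
  --key-name mykey testvm
openstack floating ip create public2
openstack server add floating ip testvm 192.168.135.130
ssh cirros@192.168.135.130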
12.10.3.3 How the Network Bridges are Created #
For OVS, a new bridge will be created by Crowbar, in this case
br-public2
. In the bridge mapping the new network will
be assigned to the bridge. The interface specified in
/etc/crowbar/network.json
(in this case eth0.600) will
be plugged into br-public2
. The new public network can
be created in neutron using the new public network name as
provider:physical_network.
For Linuxbridge, Crowbar will check the interface associated with public2. If this is the same as physnet1 no interface mapping will be created. The new public network can be created in neutron using physnet1 as physical network and specifying the correct VLAN ID:
openstack network create --external --provider-physical-network physnet1 \
  --provider-network-type vlan --provider-segment 600 public2
A bridge named brq-NET_ID
will be created and the
interface specified in /etc/crowbar/network.json
will
be plugged into it. If a new interface is associated in
/etc/crowbar/network.json
with
public2 then Crowbar will add a new interface
mapping and the second public network can be created using
public2 as the physical network:
openstack network create --provider-network-type flat \
  --provider-physical-network public2 --external public2
12.11 Deploying nova #
nova provides key services for managing the SUSE OpenStack Cloud and sets up the Compute Nodes. SUSE OpenStack Cloud currently supports KVM and VMware vSphere. The unsupported QEMU option is included to enable test setups with virtualized nodes. The following attributes can be configured for nova:
Set the “overcommit ratio” for RAM for instances on the Compute Nodes. A ratio of
1.0
means no overcommitment. Changing this value is not recommended.Set the “overcommit ratio” for CPUs for instances on the Compute Nodes. A ratio of
1.0
means no overcommitment.Set the “overcommit ratio” for virtual disks for instances on the Compute Nodes. A ratio of
1.0
means no overcommitment.Amount of reserved host memory that is not used for allocating VMs by
nova-compute
.Allows moving KVM instances to a different Compute Node running the same hypervisor (cross-hypervisor migrations are not supported). This is useful when a Compute Node needs to be shut down or rebooted for maintenance, or when the load of the Compute Node is very high. Instances can be moved while running (Live Migration).
Warning: Libvirt Migration and SecurityEnabling the libvirt migration option will open a TCP port on the Compute Nodes that allows access to all instances from all machines in the admin network. Ensure that only authorized machines have access to the admin network when enabling this option.
Tip: Specifying Network for Live Migration: It is possible to change the network used for live migration. This is done in the raw view of the nova barclamp. In the
migration
section, change thenetwork
attribute to the appropriate value (for example,storage
for Ceph).Kernel SamePage Merging (KSM) is a Linux Kernel feature which merges identical memory pages from multiple running processes into one memory region. Enabling it optimizes memory usage on the Compute Nodes when using the KVM hypervisor at the cost of slightly increasing CPU usage.
- SSL Support: Protocol
Choose whether to encrypt public communication ( ) or not ( ). If choosing , refer to SSL Support: Protocol for configuration details.
- VNC Settings: NoVNC Protocol
After having started an instance you can display its VNC console in the OpenStack Dashboard (horizon) via the browser using the noVNC implementation. By default this connection is not encrypted and can potentially be eavesdropped.
Enable encrypted communication for noVNC by choosing
and providing the locations for the certificate key pair files.Shows debugging output in the log files when set to
.
You can pass custom vendor data to all VMs via nova's metadata server. For example, information about a custom SMT server can be used by the SUSE guest images to automatically configure the repositories for the guest.
To pass custom vendor data, switch to the
view of the nova barclamp.Search for the following section:
"metadata": { "vendordata": { "json": "{}" } }
As value of the
json
entry, enter valid JSON data. For example:"metadata": { "vendordata": { "json": "{\"CUSTOM_KEY\": \"CUSTOM_VALUE\"}" } }
The string needs to be escaped because the barclamp file is in JSON format, too.
Use the following command to access the custom vendor data from inside a VM:
curl -s http://METADATA_SERVER/openstack/latest/vendor_data.json
The IP address of the metadata server is always the same from within a VM. For more details, see https://www.suse.com/communities/blog/vms-get-access-metadata-neutron/.
The nova component consists of eight different roles:
Distributing and scheduling the instances is managed by the
. It also provides networking and messaging services. needs to be installed on a Control Node.- / / /
Provides the hypervisors (KVM, QEMU, VMware vSphere, and z/VM) and tools needed to manage the instances. Only one hypervisor can be deployed on a single compute node. To use different hypervisors in your cloud, deploy different hypervisors to different Compute Nodes. A
nova-compute-*
role needs to be installed on every Compute Node. However, not all hypervisors need to be deployed.Each image that will be made available in SUSE OpenStack Cloud to start an instance is bound to a hypervisor. Each hypervisor can be deployed on multiple Compute Nodes (except for the VMware vSphere role, see below). In a multi-hypervisor deployment you should make sure to deploy the
nova-compute-*
roles in a way that enough compute power is available for each hypervisor.Note: Re-assigning HypervisorsExisting
nova-compute-*
nodes can be changed in a production SUSE OpenStack Cloud without service interruption. You need to “evacuate” the node, re-assign a new nova-compute
role via the nova barclamp, and apply the change. can only be deployed on a single node.
When deploying a
node with the ML2 driver enabled in the neutron barclamp, the following new attributes are also available in the section of the mode: (the name of the DVS switch configured on the target vCenter cluster) and (enable or disable implementing security groups through DVS traffic rules).It is important to specify the correct
value, as the barclamp expects the DVS switch to be preconfigured on the target VMware vCenter cluster.Deploying
nodes will not result in a functional cloud setup if the ML2 plugin is not enabled in the neutron barclamp.12.11.1 HA Setup for nova #
Making
highly available requires no special configuration—it is sufficient to deploy it on a cluster.To enable High Availability for Compute Nodes, deploy the following roles to one or more clusters with remote nodes:
nova-compute-kvm
nova-compute-qemu
ec2-api
The cluster to which you deploy the roles above can be completely
independent of the one to which the role nova-controller
is deployed.
However, the nova-controller
and
ec2-api
roles must be deployed the same way (either
both to a cluster or both to
individual nodes). This is due to Crowbar design limitations.
It is recommended to use shared storage for the
/var/lib/nova/instances
directory, to ensure that
ephemeral disks will be preserved during recovery of VMs from failed
compute nodes. Without shared storage, any ephemeral disks will be lost,
and recovery will rebuild the VM from its original image.
If an external NFS server is used, enable the following option in the nova barclamp proposal:
.12.12 Deploying horizon (OpenStack Dashboard) #
The last component that needs to be deployed is horizon, the OpenStack Dashboard. It provides a Web interface for users to start and stop instances and for administrators to manage users, groups, roles, etc. horizon should be installed on a Control Node. To make horizon highly available, deploy it on a cluster.
The following attributes can be configured:
- Session Timeout
Timeout (in minutes) after which a user is logged out automatically. The default value is set to four hours (240 minutes).
Note: Timeouts Larger than Four HoursEvery horizon session requires a valid keystone token. These tokens also have a lifetime of four hours (14400 seconds). Setting the horizon session timeout to a value larger than 240 will therefore have no effect, and you will receive a warning when applying the barclamp.
To successfully apply a timeout larger than four hours, you first need to adjust the keystone token expiration accordingly. To do so, open the keystone barclamp in
mode and adjust the value of the key token_expiration
. Note that the value has to be provided in seconds. When the change is successfully applied, you can adjust the horizon session timeout (in minutes). Note that extending the keystone token expiration may cause scalability issues in large and very busy SUSE OpenStack Cloud installations.Specify a regular expression with which to check the password. The default expression (
.{8,}
) tests for a minimum length of 8 characters. The string you enter is interpreted as a Python regular expression (see http://docs.python.org/2.7/library/re.html#module-re for a reference).Error message that will be displayed in case the password validation fails.
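For example, a stricter (hypothetical) password validation expression requiring at least ten characters, including one uppercase letter and one digit, could look like this:
^(?=.*[A-Z])(?=.*\d).{10,}$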
- SSL Support: Protocol
Choose whether to encrypt public communication (
) or not ( ). If choosing , you have two choices. You can either or provide the locations for the certificate key pair files and,—optionally— the certificate chain file. Using self-signed certificates is for testing purposes only and should never be used in production environments!
12.12.1 HA Setup for horizon #
Making horizon highly available requires no special configuration—it is sufficient to deploy it on a cluster.
12.13 Deploying heat (Optional) #
heat is a template-based orchestration engine that enables you to, for example, start workloads requiring multiple servers or to automatically restart instances if needed. It also brings auto-scaling to SUSE OpenStack Cloud by automatically starting additional instances if certain criteria are met. For more information about heat refer to the OpenStack documentation at http://docs.openstack.org/developer/heat/.
heat should be deployed on a Control Node. To make heat highly available, deploy it on a cluster.
The following attributes can be configured for heat:
Shows debugging output in the log files when set to
.- SSL Support: Protocol
Choose whether to encrypt public communication (SSL Support: Protocol for configuration details.
) or not ( ). If choosing , refer to
12.13.1 Enabling Identity Trusts Authorization (Optional) #
heat uses keystone Trusts to delegate a subset of user roles to the
heat engine for deferred operations (see
Steve
Hardy's blog for details). It can either delegate all user roles or
only those specified in the trusts_delegated_roles
setting. Consequently, all roles listed in
trusts_delegated_roles
need to be assigned to a user,
otherwise the user will not be able to use heat.
The recommended setting for trusts_delegated_roles
is
member
, since this is the default role most users are
likely to have. This is also the default setting when installing SUSE OpenStack Cloud
from scratch.
On installations where this setting is introduced through an upgrade,
trusts_delegated_roles
will be set to
heat_stack_owner
. This is a conservative choice to
prevent breakage in situations where unprivileged users may already have
been assigned the heat_stack_owner
role to enable them
to use heat but lack the member
role. As long as you can
ensure that all users who have the heat_stack_owner
role
also have the member
role, it is both safe and
recommended to change trusts_delegated_roles to member
.
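In the Raw view of the heat barclamp this could look like the following sketch, assuming the attribute is stored as a JSON list (as its plural name suggests); the surrounding structure is abbreviated here:
"trusts_delegated_roles": [
  "member"
],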
If the Octavia barclamp is deployed, the trusts_delegated_roles
configuration option either needs to be set to an empty value, or the
load-balancer_member
role needs to be included, otherwise
it won't be possible to create Octavia load balancers via heat stacks.
Refer to the Section 12.20.3, “Migrating Users to Octavia” section for more
details on the list of specialized roles employed by Octavia.
Also note that adding the load-balancer_member
role
to the trusts_delegated_roles
list has the undesired
side effect that only users that have this role assigned to them will be
allowed to access the Heat API, as covered previously in this section.
To view or change the trusts_delegated_role setting you need to open the
heat barclamp and click trusts_delegated_roles
setting and modify the list of
roles as desired.
An empty value for trusts_delegated_roles
will delegate
all of user roles to heat. This may create a security
risk for users who are assigned privileged roles, such as
admin
, because these privileged roles will also be
delegated to the heat engine when these users create heat stacks.
12.13.2 HA Setup for heat #
Making heat highly available requires no special configuration—it is sufficient to deploy it on a cluster.
12.14 Deploying ceilometer (Optional) #
ceilometer collects CPU and networking data from SUSE OpenStack Cloud. This data can be used by a billing system to enable customer billing. Deploying ceilometer is optional. ceilometer agents use the monasca database to store collected data.
For more information about ceilometer refer to the OpenStack documentation at http://docs.openstack.org/developer/ceilometer/.
As of SUSE OpenStack Cloud Crowbar 8 data measuring is only supported for KVM and Windows instances. Other hypervisors and SUSE OpenStack Cloud features such as object or block storage will not be measured.
The following attributes can be configured for ceilometer:
- Intervals used for OpenStack Compute, Image, or Block Storage meter updates (in seconds)
Specify intervals in seconds after which ceilometer performs updates of specified meters.
- How long are metering samples kept in the database (in days)
Specify how long to keep the metering data.
-1
means that samples are kept in the database forever.- How long are event samples kept in the database (in days)
Specify how long to keep the event data.
-1
means that samples are kept in the database forever.
The ceilometer component consists of four different roles:
The notification agent.
The polling agent listens to the message bus to collect data. It needs to be deployed on a Control Node. It can be deployed on the same node as
.The compute agents collect data from the compute nodes. They need to be deployed on all KVM compute nodes in your cloud (other hypervisors are currently not supported).
An agent collecting data from the swift nodes. This role needs to be deployed on the same node as swift-proxy.
12.14.1 HA Setup for ceilometer #
Making ceilometer highly available requires no special configuration—it is sufficient to deploy the roles
and on a cluster.12.15 Deploying manila #
manila provides coordinated access to shared or distributed file systems, similar to what cinder does for block storage. These file systems can be shared between instances in SUSE OpenStack Cloud.
manila uses different back-ends. As of SUSE OpenStack Cloud Crowbar 8 currently supported back-ends include , , and . Two more back-end options, and are available for testing purposes and are not supported.
manila uses some CephFS features that are currently not supported by the SUSE Linux Enterprise Server 12 SP4 CephFS kernel client:
RADOS namespaces
MDS path restrictions
Quotas
As a result, to access CephFS shares provisioned by manila, you must use ceph-fuse. For details, see http://docs.openstack.org/developer/manila/devref/cephfs_native_driver.html.
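A hypothetical mount of such a share from a client could look like the following (user ID, keyring, and the share's export path are placeholders):
# Mount a CephFS share with ceph-fuse (all values are placeholders)
ceph-fuse /mnt/share --id=manila-user --conf=/etc/ceph/ceph.conf \
  -k /etc/ceph/ceph.client.manila-user.keyring -r /volumes/_nogroup/SHARE_ID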
When first opening the manila barclamp, the default proposal
is already available for configuration. To replace it, first delete it by clicking the trashcan icon and then choose a different back-end in the section . Select a and—optionally—provide a . Activate the back-end with . Note that at least one back-end must be configured.The attributes that can be set to configure cinder depend on the back-end:
The generic driver is included as a technology preview and is not supported.
Provide the name of the Enterprise Virtual Server that the selected back-end is assigned to.
IP address for mounting shares.
Provide a file-system name for creating shares.
IP address of the HNAS management interface for communication between manila controller and HNAS.
HNAS user name (Base64 string) required to perform tasks such as creating file systems and network interfaces.
HNAS user password. Required only if private key is not provided.
RSA/DSA private key necessary for connecting to HNAS. Required only if password is not provided.
Time in seconds to wait before aborting stalled HNAS jobs.
Host name of the Virtual Storage Server.
The name or IP address for the storage controller or the cluster.
The port to use for communication. Port 80 is usually used for HTTP, 443 for HTTPS.
Login credentials.
Transport protocol for communicating with the storage controller or cluster. Supported protocols are HTTP and HTTPS. Choose the protocol your NetApp is licensed for.
- Use Ceph deployed by Crowbar
Set to
true
to use Ceph deployed with Crowbar.
Lets you manually pick and configure a driver. Only use this option for testing purposes, as it is not supported.
The manila component consists of two different roles:
The manila server provides the scheduler and the API. Installing it on a Control Node is recommended.
The shared storage service. It can be installed on a Control Node, but it is recommended to deploy it on one or more dedicated nodes supplied with sufficient disk space and networking capacity, since it will generate a lot of network traffic.
12.15.1 HA Setup for manila #
While the
role can be deployed on a cluster, deploying on a cluster is not supported. Therefore it is generally recommended to deploy on several nodes—this ensures the service continues to be available even when a node fails.12.16 Deploying Tempest (Optional) #
Tempest is an integration test suite for SUSE OpenStack Cloud written in Python. It contains multiple integration tests for validating your SUSE OpenStack Cloud deployment. For more information about Tempest refer to the OpenStack documentation at http://docs.openstack.org/developer/tempest/.
Tempest is only included as a technology preview and not supported.
Tempest may be used for testing whether the intended setup will run without problems. It should not be used in a production environment.
Tempest should be deployed on a Control Node.
The following attributes can be configured for Tempest:
Credentials for a regular user. If the user does not exist, it will be created.
Tenant to be used by Tempest. If it does not exist, it will be created. It is safe to stick with the default value.
Credentials for an admin user. If the user does not exist, it will be created.
To run tests with Tempest, log in to the Control Node on which
Tempest was deployed. Change into the directory
/var/lib/openstack-tempest-test
. To get an overview of
available commands, run:
./tempest --help
To serially invoke a subset of all tests (“the gating
smoketests”) to help validate the working functionality of your
local cloud instance, run the following command. It will save the output to
a log file
tempest_CURRENT_DATE.log
.
./tempest run --smoke --serial 2>&1 \ | tee "tempest_$(date +%Y-%m-%d_%H%M%S).log"
12.16.1 HA Setup for Tempest #
Tempest cannot be made highly available.
12.17 Deploying Magnum (Optional) #
Magnum is an OpenStack project which offers container orchestration engines for deploying and managing containers as first class resources in OpenStack.
For more information about Magnum, see the OpenStack documentation at http://docs.openstack.org/developer/magnum/.
For information on how to deploy a Kubernetes cluster (either from command line or from the horizon Dashboard), see the Supplement to Administrator Guide and User Guide. It is available from https://documentation.suse.com/soc/9/.
The following
can be configured for Magnum:- :
Deploying Kubernetes clusters in a cloud without an Internet connection requires the
registry_enabled
option in its cluster template set totrue
. To make this offline scenario work, you also need to set the option totrue
. This restores the old, insecure behavior for clusters with theregistry-enabled
orvolume_driver=Rexray
options enabled.- :
Domain name to use for creating trustee for bays.
- :
Increases the amount of information that is written to the log files when set to
.- :
Shows debugging output in the log files when set to
.- :
To store certificates, either use the OpenStack service, a local directory ( ), or the .
Note: barbican As Certificate ManagerIf you choose to use barbican for managing certificates, make sure that the barbican barclamp is enabled.
The Magnum barclamp consists of the following roles: . It can either be deployed on a Control Node or on a cluster—see Section 12.17.1, "HA Setup for Magnum". When deploying the role onto a Control Node, additional RAM is required for the Magnum server. It is recommended to only deploy the role to a Control Node that has 16 GB RAM.
Making Magnum highly available requires no special configuration. It is sufficient to deploy it on a cluster.
12.18 Deploying barbican (Optional) #
barbican is a component designed for storing secrets in a secure and standardized manner protected by keystone authentication. Secrets include SSL certificates and passwords used by various OpenStack components.
barbican settings can be configured in Raw mode only. To do this, open the barbican barclamp configuration in Raw mode.
When configuring barbican, pay particular attention to the following settings:
- bind_host
Bind host for the barbican API service.
- bind_port
Bind port for the barbican API service.
- processes
Number of API processes to run in Apache.
- ssl
Enable or disable SSL.
- threads
Number of API worker threads.
- debug
Enable or disable debug logging.
- enable_keystone_listener
Enable or disable the keystone listener services.
- kek
An encryption key (fixed-length 32-byte Base64-encoded value) for barbican's simple_crypto plugin. If left unspecified, the key will be generated automatically.
Note: Existing Encryption Key: If you plan to restore and use the existing barbican database after a full reinstall (including a complete wipe of the Crowbar node), make sure to save the specified encryption key beforehand. You will need to provide it after the full reinstall in order to access the data in the restored barbican database.
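If you want to set the kek explicitly, a suitable value can be generated with openssl, for example (this prints the Base64 encoding of 32 random bytes):
openssl rand -base64 32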
- SSL Support: Protocol
With the default value Section 2.3, “SSL Encryption” for background information and Section 11.4.6, “Enabling SSL” for installation instructions. The following additional configuration options will become available when choosing :
, public communication will not be encrypted. Choose to use SSL for encryption. SeeWhen set to
true
, self-signed certificates are automatically generated and copied to the correct locations. This setting is for testing purposes only and should never be used in production environments!- /
Location of the certificate key pair files.
Set this option to
true
when using self-signed certificates to disable certificate checks. This setting is for testing purposes only and should never be used in production environments!Specify the absolute path to the CA certificate. This field is mandatory, and leaving it blank will cause the barclamp to fail. To fix this issue, you have to provide the absolute path to the CA certificate, restart the
apache2
service, and re-deploy the barclamp.When the certificate is not already trusted by the pre-installed list of trusted root certificate authorities, you need to provide a certificate bundle that includes the root and all intermediate CAs.
Figure 12.30: The SSL Dialog #
12.18.1 HA Setup for barbican #
To make barbican highly available, assign the
role to the Controller Cluster.12.19 Deploying sahara #
sahara provides users with simple means to provision data processing frameworks (such as Hadoop, Spark, and Storm) on OpenStack. This is accomplished by specifying configuration parameters such as the framework version, cluster topology, node hardware details, etc.
- Logging: Verbose
Set to
true
to increase the amount of information written to the log files.
12.19.1 HA Setup for sahara #
Making sahara highly available requires no special configuration. It is sufficient to deploy it on a cluster.
12.20 Deploying Octavia #
SUSE OpenStack Cloud Crowbar 9 provides Octavia Load Balancing as a Service (LBaaS). It is used to manage a fleet of virtual machines, containers, or bare metal servers—collectively known as amphorae—which it spins up on demand.
Starting with the SUSE OpenStack Cloud Crowbar 9 release, we recommend running Octavia as a standalone load balancing solution. Neutron LBaaS is deprecated in the OpenStack Queens release, and Octavia is its replacement. Whenever possible, operators are strongly advised to migrate to Octavia. For further information on OpenStack Neutron LBaaS deprecation, refer to https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation.
Deploying the Octavia barclamp does not automatically run all tasks required to complete the migration from Neutron LBaaS.
Please refer to Section 12.20.3, “Migrating Users to Octavia” for instructions on migrating existing users to allow them to access the Octavia load balancer API after the Octavia barclamp is deployed.
Please refer to Section 12.20.4, “Migrating Neutron LBaaS Instances to Octavia” for instructions on migrating existing Neutron LBaaS load balancer instances to Octavia and on disabling the deprecated Neutron LBaaS provider after the Octavia barclamp is deployed.
Octavia consists of the following major components:
- amphorae
Amphorae are the individual virtual machines, containers, or bare metal servers that accomplish the delivery of load balancing services to tenant application environments.
- controller
The controller is the brains of Octavia. It consists of five sub-components, which run as individual daemons. They can be run on separate back-end infrastructure.
The API Controller is a subcomponent that runs Octavia’s API. It takes API requests, performs simple sanitizing on them, and ships them off to the controller worker over the Oslo messaging bus.
The controller worker subcomponent takes sanitized API commands from the API controller and performs the actions necessary to fulfill the API request.
The health manager subcomponent monitors individual amphorae to ensure they are up and running, and healthy. It also handles failover events if amphorae fail unexpectedly.
The housekeeping manager subcomponent cleans up stale (deleted) database records, manages the spares pool, and manages amphora certificate rotation.
The driver agent subcomponent receives status and statistics updates from provider drivers.
- network
Octavia cannot accomplish what it does without manipulating the network environment. Amphorae are spun up with a network interface on the
load balancer network
. They can also plug directly into tenant networks to reach back-end pool members, depending on how any given load balancing service is deployed by the tenant.
The OpenStack Octavia team has created a glossary of terms used within the context of the Octavia project and Neutron LBaaS version 2. This glossary is available here: Octavia Glossary.
In accomplishing its role, Octavia requires OpenStack services managed by other barclamps to be already deployed:
Nova - For managing amphora lifecycle and spinning up compute resources on demand.
Neutron - For network connectivity between amphorae, tenant environments, and external networks.
Barbican - For managing TLS certificates and credentials, when TLS session termination is configured on the amphorae.
Keystone - For authentication against the Octavia API, and for Octavia to authenticate with other OpenStack projects.
Glance - For storing the amphora virtual machine image.
The Octavia barclamp component consists of following roles:
- octavia-api
The Octavia API.
- octavia-backend
Octavia worker, health-manager and house-keeping.
12.20.1 Prerequisites #
Before configuring and applying the Octavia barclamp, there are a couple of prerequisites that have to be prepared: the Neutron management network used by the Octavia control plane services to communicate with Amphorae and the certificates needed to secure this communication.
12.20.1.1 Management network #
Octavia needs a neutron provider network as a management network that the controller uses to communicate with the amphorae. The amphorae that Octavia deploys have interfaces and IP addresses on this network. It’s important that the subnet deployed on this network be sufficiently large to allow for the maximum number of amphorae and controllers likely to be deployed throughout the lifespan of the cloud installation.
To configure the Octavia management network, the network configuration
must be initialized or updated to include an octavia
network entry. The Octavia barclamp uses this information to automatically
create the neutron provider network used for management traffic.
If you have not yet deployed Crowbar, add the following configuration to
/etc/crowbar/network.json
to set up the Octavia management network, using the applicable VLAN ID and network address values. If you have already deployed Crowbar, then add this configuration to the view of the Network Barclamp.
"octavia": {
  "conduit": "intf1",
  "vlan": 450,
  "use_vlan": true,
  "add_bridge": false,
  "subnet": "172.31.0.0",
  "netmask": "255.255.0.0",
  "broadcast": "172.31.255.255",
  "ranges": {
    "host": { "start": "172.31.0.1", "end": "172.31.0.255" },
    "dhcp": { "start": "172.31.1.1", "end": "172.31.255.254" }
  }
},
ImportantCare should be taken to ensure the IP subnet doesn't overlap with any of those configured for the other networks. The chosen VLAN ID must not be used within the SUSE OpenStack Cloud network and not used by neutron (i.e. if deploying neutron with VLAN support - using the plugins linuxbridge or openvswitch plus VLAN - ensure that the VLAN ID doesn't overlap with the range of VLAN IDs allocated for the
nova-fixed
neutron network).The
host
range will be used to allocate IP addresses to the controller nodes where Octavia services are running, so it needs to accommodate the maximum number of controller nodes likely to be deployed throughout the lifespan of the cloud installation.The
dhcp
range will be reflected in the configuration of the actual neutron provider network used for Octavia management traffic and its size will determine the maximum number of amphorae and therefore the maximum number of load balancer instances that can be running at the same time.See Section 7.5, “Custom Network Configuration” for detailed instructions on how to customize the network configuration.
If Crowbar is already deployed, it is also necessary to re-apply both the neutron Barclamp and the nova Barclamp for the configuration to take effect before applying the Octavia Barclamp.
Aside from configuring the physical switches to allow VLAN traffic to be correctly forwarded, no additional external network configuration is required.
12.20.1.2 Certificates #
Crowbar will automatically change the filesystem ownership settings for the certificate files described below to match the username and group used by the Octavia services, but it is otherwise the responsibility of the cloud administrator to ensure that access to these files on the controller nodes is properly restricted.
Octavia administrators set up certificate authorities for the
two-way TLS authentication used in Octavia for command and
control of amphorae. For more information, see the
Creating the Certificate Authorities
section
of the Octavia
Certificate Configuration Guide . Note that the Configuring
Octavia
section of that guide does not apply as the
barclamp will configure Octavia.
The following certificates need to be generated and stored on all
controller nodes where Octavia is deployed under
/etc/octavia/certs
, in a relative path matching the
certificate location attribute values configured in the Octavia barclamp:
- Server CA certificate
The Certificate Authority (CA) certificate that is used by the Octavia controller(s) to sign the generated Amphora server certificates. The Octavia control plane services also validate the server certificates presented by Amphorae during the TLS handshake against this CA certificate.
- Server CA key
The private key associated with the server CA certificate. This key must be encrypted with a non-empty passphrase that also needs to be provided as a separate barclamp attribute. The private key is required alongside the server CA certificate on the Octavia controller(s), to sign the generated Amphora server certificates.
- Passphrase
The passphrase used to encrypt the server CA key.
- Client CA certificate
The CA certificate used to sign the client certificates installed on the Octavia controller nodes and presented by Octavia control plane services during the TLS handshake. This CA certificate is stored on the Amphorae, which use it to validate the client certificate presented by the Octavia control plane services during the TLS handshake. The same CA certificate may be used for both client and server roles, but this is perceived as a security weakness and recommended against, as a server certificate from an amphora could be used to impersonate a controller.
- Client certificate concat key
The client certificate, signed with the client CA certificate, bundled together with the client certificate key, that is presented by the Octavia control plane services during the TLS handshake.
All Octavia barclamp attributes listed above, with the exception of the
passphrase, are paths relative to /etc/octavia/certs
.
The required certificates must be present in their corresponding locations
on all controller nodes where the Octavia barclamp will be deployed.
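The following is only an illustrative sketch of how such a set of certificates could be created with plain openssl; it is not the official procedure. For production deployments follow the Octavia Certificate Configuration Guide referenced above, and adjust the file names to match the locations configured in the barclamp:
# Server CA and its encrypted key (the passphrase must also be set in the barclamp)
openssl genrsa -aes256 -passout pass:SERVER_CA_PASSPHRASE -out server_ca.key.pem 4096
openssl req -x509 -new -key server_ca.key.pem -passin pass:SERVER_CA_PASSPHRASE \
  -days 3650 -subj "/CN=octavia-server-ca" -out server_ca.cert.pem
# Client CA, plus a client certificate bundle presented by the Octavia controllers
openssl genrsa -out client_ca.key.pem 4096
openssl req -x509 -new -key client_ca.key.pem -days 3650 \
  -subj "/CN=octavia-client-ca" -out client_ca.cert.pem
openssl genrsa -out client.key.pem 4096
openssl req -new -key client.key.pem -subj "/CN=octavia-controller" -out client.csr.pem
openssl x509 -req -in client.csr.pem -CA client_ca.cert.pem -CAkey client_ca.key.pem \
  -CAcreateserial -days 3650 -out client.cert.pem
cat client.cert.pem client.key.pem > client_cert-and-key.pem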
12.20.2 Barclamp raw mode #
To be able to debug or get access to an amphora, you can provide an SSH key name to the barclamp via the raw mode. This is the name of a keypair that has been uploaded to OpenStack. For example:
openstack keypair create --public-key /etc/octavia/.ssh/id_rsa_amphora.pub octavia_key
Note that the keypair has to be owned by the octavia user.
12.20.3 Migrating Users to Octavia #
This behavior is not backwards compatible with the legacy Neutron LBaaS API policy: non-admin OpenStack users will not be allowed to run openstack loadbalancer CLI commands or use the load balancer horizon dashboard unless their accounts are explicitly reconfigured to be associated with one or more of the Octavia roles listed below.
Please follow the instructions documented under Section 12.13.1, “Enabling Identity Trusts Authorization (Optional)” on updating the trusts roles in the heat barclamp configuration. This is required to configure heat to use the correct roles when communicating with the Octavia API and managing load balancers.
Octavia employs a set of specialized roles to control access to the load balancer API:
- load-balancer_observer
User has access to load-balancer read-only APIs.
- load-balancer_global_observer
User has access to load-balancer read-only APIs including resources owned by others.
- load-balancer_member
User has access to load-balancer read and write APIs.
- load-balancer_quota_admin
User is considered an admin for quota APIs only.
- load-balancer_admin
User is considered an admin for all load-balancer APIs including resources owned by others.
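For example, an existing user can be granted access to the load balancer API with a role assignment similar to the following; the project and user names are placeholders:
openstack role add --project example-project --user example-user load-balancer_member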
12.20.4 Migrating Neutron LBaaS Instances to Octavia #
Disabling LBaaS or switching the LBaaS provider in the Neutron barclamp to Octavia is not possible while there are load balancers still running under the previous Neutron LBaaS provider and will result in a Neutron barclamp redeployment failure. To avoid this, ensure that load balancer instances that are running under the old provider are either migrated or deleted.
The migration procedure documented in this section is only relevant if LBaaS was already enabled in the Neutron barclamp, with either the HAProxy or F5 provider configured, before Octavia was deployed. The procedure should be followed by operators to migrate and/or delete all load balancer instances that are still active under the Neutron LBaaS provider, and to conclude the switch to Octavia by reconfiguring or disabling the deprecated Neutron LBaaS feature.
Octavia is a replacement for the Neutron LBaaS feature, which is deprecated in the SUSE OpenStack Cloud Crowbar 9 release. However, deploying the Octavia barclamp does not automatically disable the legacy Neutron LBaaS provider if one is already configured in the Neutron barclamp.
Both Octavia and Neutron LBaaS need to be enabled at the same time, to facilitate the load balancer migration process. This way, operators have a migration path they can use to gradually decommission Neutron LBaaS load balancers that use the HAProxy or F5 provider and replace them with Octavia load balancers.
With Octavia deployed and Neutron LBaaS enabled, both load balancer providers can be used simultaneously:
- The (deprecated) neutron lbaas-... CLI commands can be used to manage load balancer instances using the legacy Neutron LBaaS provider configured in the Neutron barclamp. Note that legacy Neutron LBaaS instances are not visible in the load balancer horizon dashboard.
- The openstack loadbalancer CLI commands as well as the load balancer horizon dashboard can be used to manage Octavia load balancers. Also note that OpenStack users are required to have special roles associated with their projects to be able to access the Octavia API, as covered in Section 12.20.3, “Migrating Users to Octavia”.
(Optional) To prevent regular users from creating or changing the configuration of currently running legacy Neutron LBaaS load balancer instances during the migration process, the neutron API policy can be temporarily changed to prevent these operations. For this purpose, create a neutron-lbaas.json file in the /etc/neutron/policy.d folder on all neutron-server nodes (no service restart required):
mkdir /etc/neutron/policy.d
cat > /etc/neutron/policy.d/neutron-lbaas.json <<EOF
{
  "context_is_admin": "role:admin",
  "context_is_advsvc": "role:advsvc",
  "default": "rule:admin_or_owner",
  "create_loadbalancer": "rule:admin_only",
  "update_loadbalancer": "rule:admin_only",
  "get_loadbalancer": "!",
  "delete_loadbalancer": "rule:admin_only",
  "create_listener": "rule:admin_only",
  "get_listener": "",
  "delete_listener": "rule:admin_only",
  "update_listener": "rule:admin_only",
  "create_pool": "rule:admin_only",
  "get_pool": "",
  "delete_pool": "rule:admin_only",
  "update_pool": "rule:admin_only",
  "create_healthmonitor": "rule:admin_only",
  "get_healthmonitor": "",
  "update_healthmonitor": "rule:admin_only",
  "delete_healthmonitor": "rule:admin_only",
  "create_pool_member": "rule:admin_only",
  "get_pool_member": "",
  "update_pool_member": "rule:admin_only",
  "delete_pool_member": "rule:admin_only"
}
EOF
chown -R root:neutron /etc/neutron/policy.d
chmod 640 /etc/neutron/policy.d/neutron-lbaas.json
If users need to create or change the configuration of currently running legacy Neutron LBaaS load balancer instances during the migration process, empty the neutron-lbaas.json file in the /etc/neutron/policy.d folder on all neutron-server nodes (or create it empty if it does not exist), then restart the neutron service via systemctl restart openstack-neutron.service on all neutron-server nodes.
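A minimal sketch of this revert on one neutron-server node:
# Empty the policy override, then restart neutron; repeat on every neutron-server node
: > /etc/neutron/policy.d/neutron-lbaas.json
systemctl restart openstack-neutron.service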
With all of the above in place, the actual migration process consists of replacing Neutron LBaaS instances with Octavia instances. There are many different ways to accomplish this, depending on the size and purpose of the cloud deployment, the number of load balancers that need to be migrated, the project and user configuration, and so on. This section only gives a few pointers and recommendations on how to approach this task, but the actual execution needs to be attuned to each particular situation.
Migrating a single load balancer instance generally comprises these steps:
- Use the neutron lbaas-... CLI to retrieve information about the load balancer configuration, including the complete set of related listener, pool, member and health monitor instances.
- Use the openstack loadbalancer CLI or the load balancer horizon dashboard to create an Octavia load balancer and its associated listener, pool, member and health monitor instances that accurately match the project and Neutron LBaaS load balancer configuration extracted during the previous step. Note that the Octavia load balancer instance and the Neutron LBaaS instance cannot share the same VIP address if both instances are running at the same time. This can be a problem if the load balancer VIP address is accessed directly (that is, not via a floating IP). In this case, the legacy load balancer instance needs to be deleted first, which incurs a longer interruption in service availability.
- Once the Octavia instance is up and running, if a floating IP is associated with the Neutron LBaaS load balancer VIP address, re-associate the floating IP with the Octavia load balancer VIP address. Using a floating IP has the advantage that the migration can be performed with minimal downtime. If the load balancer VIP address needs to be accessed directly (for example, from another VM attached to the same Neutron network or router), then all the affected remote services need to be reconfigured to use the new VIP address.
The two load balancer instances can continue to run in parallel, while the operator or owner verifies the Octavia load balancer operation. If any problems occur, the change can be reverted by undoing the actions performed during the previous step. If a floating IP is involved, this could be as simple as switching it back to the Neutron LBaaS load balancer instance.
When it's safe, delete the Neutron LBaaS load balancer instance, along with all its related listener, pool, member and health monitor instances.
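The following sketch illustrates these steps for a single HTTP load balancer; all names, addresses and IDs are placeholders, and the real listener, pool and member settings must be taken from the output of the first step:
# Inspect the legacy Neutron LBaaS configuration
neutron lbaas-loadbalancer-show legacy-lb
neutron lbaas-listener-list
neutron lbaas-pool-list
# Recreate the equivalent Octavia load balancer (a new VIP address is allocated)
openstack loadbalancer create --name new-lb --vip-subnet-id private-subnet
openstack loadbalancer listener create --name new-listener --protocol HTTP --protocol-port 80 new-lb
openstack loadbalancer pool create --name new-pool --listener new-listener --protocol HTTP --lb-algorithm ROUND_ROBIN
openstack loadbalancer member create --subnet-id private-subnet --address 192.0.2.10 --protocol-port 80 new-pool
# Re-point the floating IP at the new VIP port, then delete the legacy instance when it is safe
openstack floating ip set --port NEW_VIP_PORT_ID FLOATING_IP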
Depending on the number of load balancer instances that need to be migrated and the complexity of the overall setup that they are integrated into, the migration may be performed by the cloud operators, the owners themselves, or a combination of both. It is generally recommended that the load balancer owners have some involvement in this process or at least be notified of this migration procedure, because the load balancer migration is not an entirely seamless operation. One or more of the load balancer configuration attributes listed below may change during the migration and there may be other operational components, managed by OpenStack or otherwise (e.g. OpenStack heat stacks, configuration management scripts, database entries or non-persistent application states, etc.), that only the owner(s) may be aware of:
The load balancer UUID value, along with the UUID values of every other related object (listeners, pools, members etc.). Even though the name values may be preserved by the migration, the UUID values will be different.
The load balancer VIP address will change during a non-disruptive migration. This is especially relevant if there is no floating IP associated with the previous VIP address.
When the load balancer migration is complete, the Neutron LBaaS provider can either be switched to Octavia or turned off entirely in the Neutron barclamp, to finalize the migration process.
The only advantage of having Octavia configured as the Neutron LBaaS
provider is that it continues to allow users to manage Octavia load
balancers via the deprecated neutron lbaas-...
CLI, but
it is otherwise recommended to disable LBaaS in the Neutron barclamp.
12.21 Deploying ironic (optional) #
Ironic is the OpenStack bare metal service for provisioning physical machines. Refer to the OpenStack developer and admin manual for information on drivers and on administering ironic.
Deploying the ironic barclamp is done in five steps:
- Set options in the Custom view of the barclamp.
- List the enabled_drivers in the Raw view.
- Configure the ironic network in network.json.
- Apply the barclamp to a Control Node.
- Apply the nova-compute-ironic role to the same node you applied the ironic barclamp to, in place of the other nova-compute-* roles.
12.21.1 Custom View Options #
Currently, there are two options in the Custom view of the barclamp.
- Node Cleaning
Node cleaning prepares the node to accept a new workload. When automatic cleaning is enabled, ironic collects a list of cleaning steps from the Power, Deploy, Management, and RAID interfaces of the driver assigned to the node. ironic automatically prioritizes and executes the cleaning steps, and changes the state of the node to "cleaning". When cleaning is complete, the state becomes "available". After a new workload is assigned to the machine, its state changes to "active". Disabling automatic cleaning means that you must configure and apply node cleaning manually. This requires the admin to create and prioritize the cleaning steps, and to set up a cleaning network. Apply manual cleaning when you have long-running or destructive tasks that you wish to monitor and control more closely.
- SSL Support
SSL support is not yet enabled, so only the default option is available.
12.21.2 ironic Drivers #
You must enter the Raw view of the barclamp and specify a list of drivers to load during service initialization.
pxe_ipmitool
is the recommended default ironic driver. It uses the
Intelligent Platform Management Interface (IPMI) to control the power state
of your bare metal machines, creates the appropriate PXE configurations
to start them, and then performs the steps to provision and configure the machines.
"enabled_drivers": ["pxe_ipmitool"],
See ironic Drivers for more information.
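Once the barclamp has been applied, a bare metal node using this driver would typically be enrolled along the following lines; the address and credentials are placeholders, and the full enrollment workflow is covered by the OpenStack documentation referenced above:
openstack baremetal node create --driver pxe_ipmitool \
  --driver-info ipmi_address=192.0.2.10 \
  --driver-info ipmi_username=admin \
  --driver-info ipmi_password=secret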
12.21.3 Example ironic Network Configuration #
This is a complete ironic network.json example, using the default network.json, followed by a diff that shows the ironic-specific configurations.
{ "start_up_delay": 30, "enable_rx_offloading": true, "enable_tx_offloading": true, "mode": "single", "teaming": { "mode": 1 }, "interface_map": [ { "bus_order": [ "0000:00/0000:00:01", "0000:00/0000:00:03" ], "pattern": "PowerEdge R610" }, { "bus_order": [ "0000:00/0000:00:01.1/0000:01:00.0", "0000:00/0000:00:01.1/0000.01:00.1", "0000:00/0000:00:01.0/0000:02:00.0", "0000:00/0000:00:01.0/0000:02:00.1" ], "pattern": "PowerEdge R620" }, { "bus_order": [ "0000:00/0000:00:01", "0000:00/0000:00:03" ], "pattern": "PowerEdge R710" }, { "bus_order": [ "0000:00/0000:00:04", "0000:00/0000:00:02" ], "pattern": "PowerEdge C6145" }, { "bus_order": [ "0000:00/0000:00:03.0/0000:01:00.0", "0000:00/0000:00:03.0/0000:01:00.1", "0000:00/0000:00:1c.4/0000:06:00.0", "0000:00/0000:00:1c.4/0000:06:00.1" ], "pattern": "PowerEdge R730xd" }, { "bus_order": [ "0000:00/0000:00:1c", "0000:00/0000:00:07", "0000:00/0000:00:09", "0000:00/0000:00:01" ], "pattern": "PowerEdge C2100" }, { "bus_order": [ "0000:00/0000:00:01", "0000:00/0000:00:03", "0000:00/0000:00:07" ], "pattern": "C6100" }, { "bus_order": [ "0000:00/0000:00:01", "0000:00/0000:00:02" ], "pattern": "product" } ], "conduit_map": [ { "conduit_list": { "intf0": { "if_list": [ "1g1", "1g2" ] }, "intf1": { "if_list": [ "1g1", "1g2" ] }, "intf2": { "if_list": [ "1g1", "1g2" ] }, "intf3": { "if_list": [ "1g1", "1g2" ] } }, "pattern": "team/.*/.*" }, { "conduit_list": { "intf0": { "if_list": [ "?1g1" ] }, "intf1": { "if_list": [ "?1g2" ] }, "intf2": { "if_list": [ "?1g1" ] }, "intf3": { "if_list": [ "?1g2" ] } }, "pattern": "dual/.*/.*" }, { "conduit_list": { "intf0": { "if_list": [ "?1g1" ] }, "intf1": { "if_list": [ "?1g1" ] }, "intf2": { "if_list": [ "?1g1" ] }, "intf3": { "if_list": [ "?1g2" ] } }, "pattern": "single/.*/.*ironic.*" }, { "conduit_list": { "intf0": { "if_list": [ "?1g1" ] }, "intf1": { "if_list": [ "?1g1" ] }, "intf2": { "if_list": [ "?1g1" ] }, "intf3": { "if_list": [ "?1g1" ] } }, "pattern": "single/.*/.*" }, { "conduit_list": { "intf0": { "if_list": [ "?1g1" ] }, "intf1": { "if_list": [ "1g1" ] }, "intf2": { "if_list": [ "1g1" ] }, "intf3": { "if_list": [ "1g1" ] } }, "pattern": ".*/.*/.*" }, { "conduit_list": { "intf0": { "if_list": [ "1g1" ] }, "intf1": { "if_list": [ "?1g1" ] }, "intf2": { "if_list": [ "?1g1" ] }, "intf3": { "if_list": [ "?1g1" ] } }, "pattern": "mode/1g_adpt_count/role" } ], "networks": { "ironic": { "conduit": "intf3", "vlan": 100, "use_vlan": false, "add_bridge": false, "add_ovs_bridge": false, "bridge_name": "br-ironic", "subnet": "192.168.128.0", "netmask": "255.255.255.0", "broadcast": "192.168.128.255", "router": "192.168.128.1", "router_pref": 50, "ranges": { "admin": { "start": "192.168.128.10", "end": "192.168.128.11" }, "dhcp": { "start": "192.168.128.21", "end": "192.168.128.254" } }, "mtu": 1500 }, "storage": { "conduit": "intf1", "vlan": 200, "use_vlan": true, "add_bridge": false, "mtu": 1500, "subnet": "192.168.125.0", "netmask": "255.255.255.0", "broadcast": "192.168.125.255", "ranges": { "host": { "start": "192.168.125.10", "end": "192.168.125.239" } } }, "public": { "conduit": "intf1", "vlan": 300, "use_vlan": true, "add_bridge": false, "subnet": "192.168.122.0", "netmask": "255.255.255.0", "broadcast": "192.168.122.255", "router": "192.168.122.1", "router_pref": 5, "ranges": { "host": { "start": "192.168.122.2", "end": "192.168.122.127" } }, "mtu": 1500 }, "nova_fixed": { "conduit": "intf1", "vlan": 500, "use_vlan": true, "add_bridge": false, "add_ovs_bridge": false, "bridge_name": "br-fixed", "subnet": 
"192.168.123.0", "netmask": "255.255.255.0", "broadcast": "192.168.123.255", "router": "192.168.123.1", "router_pref": 20, "ranges": { "dhcp": { "start": "192.168.123.1", "end": "192.168.123.254" } }, "mtu": 1500 }, "nova_floating": { "conduit": "intf1", "vlan": 300, "use_vlan": true, "add_bridge": false, "add_ovs_bridge": false, "bridge_name": "br-public", "subnet": "192.168.122.128", "netmask": "255.255.255.128", "broadcast": "192.168.122.255", "ranges": { "host": { "start": "192.168.122.129", "end": "192.168.122.254" } }, "mtu": 1500 }, "bmc": { "conduit": "bmc", "vlan": 100, "use_vlan": false, "add_bridge": false, "subnet": "192.168.124.0", "netmask": "255.255.255.0", "broadcast": "192.168.124.255", "ranges": { "host": { "start": "192.168.124.162", "end": "192.168.124.240" } }, "router": "192.168.124.1" }, "bmc_vlan": { "conduit": "intf2", "vlan": 100, "use_vlan": true, "add_bridge": false, "subnet": "192.168.124.0", "netmask": "255.255.255.0", "broadcast": "192.168.124.255", "ranges": { "host": { "start": "192.168.124.161", "end": "192.168.124.161" } } }, "os_sdn": { "conduit": "intf1", "vlan": 400, "use_vlan": true, "add_bridge": false, "mtu": 1500, "subnet": "192.168.130.0", "netmask": "255.255.255.0", "broadcast": "192.168.130.255", "ranges": { "host": { "start": "192.168.130.10", "end": "192.168.130.254" } } }, "admin": { "conduit": "intf0", "vlan": 100, "use_vlan": false, "add_bridge": false, "mtu": 1500, "subnet": "192.168.124.0", "netmask": "255.255.255.0", "broadcast": "192.168.124.255", "router": "192.168.124.1", "router_pref": 10, "ranges": { "admin": { "start": "192.168.124.10", "end": "192.168.124.11" }, "dhcp": { "start": "192.168.124.21", "end": "192.168.124.80" }, "host": { "start": "192.168.124.81", "end": "192.168.124.160" }, "switch": { "start": "192.168.124.241", "end": "192.168.124.250" } } } } }
This diff should help you separate the ironic items from the default network.json.
--- network.json 2017-06-07 09:22:38.614557114 +0200 +++ ironic_network.json 2017-06-05 12:01:15.927028019 +0200 @@ -91,6 +91,12 @@ "1g1", "1g2" ] + }, + "intf3": { + "if_list": [ + "1g1", + "1g2" + ] } }, "pattern": "team/.*/.*" @@ -111,6 +117,11 @@ "if_list": [ "?1g1" ] + }, + "intf3": { + "if_list": [ + "?1g2" + ] } }, "pattern": "dual/.*/.*" @@ -131,6 +142,36 @@ "if_list": [ "?1g1" ] + }, + "intf3": { + "if_list": [ + "?1g2" + ] + } + }, + "pattern": "single/.*/.*ironic.*" + }, + { + "conduit_list": { + "intf0": { + "if_list": [ + "?1g1" + ] + }, + "intf1": { + "if_list": [ + "?1g1" + ] + }, + "intf2": { + "if_list": [ + "?1g1" + ] + }, + "intf3": { + "if_list": [ + "?1g1" + ] } }, "pattern": "single/.*/.*" @@ -151,6 +192,11 @@ "if_list": [ "1g1" ] + }, + "intf3": { + "if_list": [ + "1g1" + ] } }, "pattern": ".*/.*/.*" @@ -171,12 +217,41 @@ "if_list": [ "?1g1" ] + }, + "intf3": { + "if_list": [ + "?1g1" + ] } }, "pattern": "mode/1g_adpt_count/role" } ], "networks": { + "ironic": { + "conduit": "intf3", + "vlan": 100, + "use_vlan": false, + "add_bridge": false, + "add_ovs_bridge": false, + "bridge_name": "br-ironic", + "subnet": "192.168.128.0", + "netmask": "255.255.255.0", + "broadcast": "192.168.128.255", + "router": "192.168.128.1", + "router_pref": 50, + "ranges": { + "admin": { + "start": "192.168.128.10", + "end": "192.168.128.11" + }, + "dhcp": { + "start": "192.168.128.21", + "end": "192.168.128.254" + } + }, + "mtu": 1500 + }, "storage": { "conduit": "intf1", "vlan": 200,
12.22 How to Proceed #
With a successful deployment of the OpenStack Dashboard, the SUSE OpenStack Cloud Crowbar installation is finished. To be able to test your setup by starting an instance, one last step remains: uploading an image to the glance component. Refer to the Supplement to Administrator Guide and User Guide, chapter Manage images, for instructions. Images for SUSE OpenStack Cloud can be built in SUSE Studio. Refer to the Supplement to Administrator Guide and User Guide, section Building Images with SUSE Studio.
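For a quick test, an image can also be uploaded from the command line; the image name and file are placeholders, and the referenced guide describes the supported formats:
openstack image create --disk-format qcow2 --container-format bare \
  --file my-image.qcow2 my-image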
Now you can hand over to the cloud administrator to set up users, roles, flavors, etc.; refer to the Administrator Guide for details. The default credentials for the OpenStack Dashboard are user name admin and password crowbar.
12.23 SUSE Enterprise Storage integration #
SUSE OpenStack Cloud Crowbar supports integration with SUSE Enterprise Storage (SES), enabling Ceph block storage as well as image storage services in SUSE OpenStack Cloud.
Enabling SES Integration #
To enable SES integration on Crowbar, an SES configuration file must be
uploaded to Crowbar. SES integration functionality is included in the
crowbar-core
package and can be used with the Crowbar UI
or CLI (crowbarctl
). The SES configuration file
describes various aspects of the Ceph environment, and keyrings for each
user and pool created in the Ceph environment for SUSE OpenStack Cloud Crowbar services.
SES 7 Configuration #
The following instructions detail integrating SUSE Enterprise Storage 7.0 with SUSE OpenStack Cloud.
Create the OSD pools on the SUSE Enterprise Storage admin node (the names provided here are examples):
ceph osd pool create ses-cloud-volumes 16 && \
ceph osd pool create ses-cloud-backups 16 && \
ceph osd pool create ses-cloud-images 16 && \
ceph osd pool create ses-cloud-vms 16
Enable the OSD pools:
ceph osd pool application enable ses-cloud-volumes rbd && \
ceph osd pool application enable ses-cloud-backups rbd && \
ceph osd pool application enable ses-cloud-images rbd && \
ceph osd pool application enable ses-cloud-vms rbd
Configure permissions on the SUSE OpenStack Cloud Crowbar admin node:
ceph-authtool -C /etc/ceph/ceph.client.ses-cinder.keyring --name client.ses-cinder \
  --add-key $(ceph-authtool --gen-print-key) --cap mon "allow r" \
  --cap osd "allow class-read object_prefix rbd_children, allow rwx pool=ses-cloud-volumes, allow rwx pool=ses-cloud-vms, allow rwx pool=ses-cloud-images"
ceph-authtool -C /etc/ceph/ceph.client.ses-cinder-backup.keyring --name client.ses-cinder-backup \
  --add-key $(ceph-authtool --gen-print-key) --cap mon "allow r" \
  --cap osd "allow class-read object_prefix rbd_children, allow rwx pool=ses-cloud-backups"
ceph-authtool -C /etc/ceph/ceph.client.ses-glance.keyring --name client.ses-glance \
  --add-key $(ceph-authtool --gen-print-key) --cap mon "allow r" \
  --cap osd "allow class-read object_prefix rbd_children, allow rwx pool=ses-cloud-images"
Import the updated keyrings into Ceph:
ceph auth import -i /etc/ceph/ceph.client.ses-cinder-backup.keyring && \
ceph auth import -i /etc/ceph/ceph.client.ses-cinder.keyring && \
ceph auth import -i /etc/ceph/ceph.client.ses-glance.keyring
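Optionally, verify that the imported keys are now known to the cluster (a quick check, not part of the documented procedure):
ceph auth get client.ses-cinder
ceph auth get client.ses-cinder-backup
ceph auth get client.ses-glance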
SES 6, 5.5, 5 Configuration #
For SES deployments that are version 5.5 or 6, a Salt runner is used to create all the users and pools. It also generates a YAML configuration that is needed to integrate with SUSE OpenStack Cloud. The integration runner creates separate users for cinder, cinder backup (not used by Crowbar currently) and glance. Both the cinder and nova services have the same user, because cinder needs access to create objects that nova uses.
Support for SUSE Enterprise Storage 5 and 5.5 is deprecated. The documentation for integrating these versions is included for customers who may not yet have upgraded to newer versions of SUSE Enterprise Storage. These versions are no longer officially supported.
Configure SES 6, 5.5, or 5 with the following steps:
Log in as root and run the SES 5.5 Salt runner on the Salt admin host:
root # salt-run --out=yaml openstack.integrate prefix=mycloud
The prefix parameter allows pools to be created with the specified prefix. By using different prefix parameters, multiple cloud deployments can support different users and pools on the same SES deployment.
YAML output is created with content similar to the following example. It can be redirected to a file using the redirect operator > or the additional parameter --out-file=<filename>:
ceph_conf:
  cluster_network: 10.84.56.0/21
  fsid: d5d7c7cb-5858-3218-a36f-d028df7b0673
  mon_host: 10.84.56.8, 10.84.56.9, 10.84.56.7
  mon_initial_members: ses-osd1, ses-osd2, ses-osd3
  public_network: 10.84.56.0/21
cinder:
  key: ABCDEFGaxefEMxAAW4zp2My/5HjoST2Y87654321==
  rbd_store_pool: mycloud-cinder
  rbd_store_user: cinder
cinder-backup:
  key: AQBb8hdbrY2bNRAAqJC2ZzR5Q4yrionh7V5PkQ==
  rbd_store_pool: mycloud-backups
  rbd_store_user: cinder-backup
glance:
  key: AQD9eYRachg1NxAAiT6Hw/xYDA1vwSWLItLpgA==
  rbd_store_pool: mycloud-glance
  rbd_store_user: glance
nova:
  rbd_store_pool: mycloud-nova
radosgw_urls:
  - http://10.84.56.7:80/swift/v1
  - http://10.84.56.8:80/swift/v1
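For example, to write the runner output directly to a file (the path is only an example):
salt-run --out=yaml --out-file=/root/ses_config.yml openstack.integrate prefix=mycloud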
Upload the generated YAML file to Crowbar using the UI or the crowbarctl CLI.
If the Salt runner is not available, you must manually create pools and users to allow SUSE OpenStack Cloud services to use the SES/Ceph cluster. Pools and users must be created for cinder, nova, and glance. Instructions for creating and managing pools, users and keyrings can be found in the SUSE Enterprise Storage Administration Guide in the Key Management section.
After the required pools and users are set up on the SUSE Enterprise Storage/Ceph cluster, create an SES configuration file in YAML format (using the example template above). Upload this file to Crowbar using the UI or the crowbarctl CLI.
As indicated above, the SES configuration file can be uploaded to Crowbar using either the UI or the crowbarctl CLI. From the main Crowbar UI, the upload page is reachable from the menu. If a configuration is already stored in Crowbar, it is visible on the upload page. A newly uploaded configuration replaces the existing one. The new configuration is applied to the cloud on the next chef-client run; there is no need to reapply proposals. Configurations can also be deleted from Crowbar. After deleting a configuration, you must manually update and reapply all proposals that used SES integration.
With the crowbarctl CLI, the command crowbarctl ses upload FILE accepts a path to the SES configuration file.
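For example (the path is only an example):
crowbarctl ses upload /root/ses_config.yml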
Cloud Service Configuration #
SES integration with SUSE OpenStack Cloud services is implemented with the relevant barclamps and installed with the crowbar-openstack package.
- glance
Set Use SES Configuration to true under RADOS Store Parameters. The glance barclamp pulls the uploaded SES configuration from Crowbar when applying the glance proposal and on chef-client runs. If the SES configuration is uploaded before the glance proposal is created, Use SES Configuration is enabled automatically upon proposal creation.
- cinder
Create a new RADOS backend and set Use SES Configuration to true. The cinder barclamp pulls the uploaded SES configuration from Crowbar when applying the cinder proposal and on chef-client runs. If the SES configuration was uploaded before the cinder proposal was created, a ses-ceph RADOS backend is created automatically on proposal creation with Use SES Configuration already enabled.
- nova
To connect with volumes stored in SES, nova uses the configuration from the cinder barclamp. For ephemeral storage, nova re-uses the rbd_store_user and key from cinder but has a separate rbd_store_pool defined in the SES configuration. Ephemeral storage on SES can be enabled or disabled by setting Use Ceph RBD Ephemeral Backend in the nova proposal. In new deployments it is enabled by default; in existing ones it is disabled for compatibility reasons.
RADOS Gateway Integration #
Besides block storage, the SES cluster can also be used as a swift replacement for object storage. If the radosgw_urls section is present in the uploaded SES configuration, the first of the URLs is registered in the keystone catalog as the "swift"/"object-store" service. Some configuration is needed on the SES side to fully integrate with keystone authentication.
If SES integration is enabled on a cloud with swift deployed, the SES object storage service gets higher priority by default. To override this and use swift for object storage instead, remove the radosgw_urls section from the SES configuration file and re-upload it to Crowbar. Re-apply the swift proposal or wait for the next periodic chef-client run for the changes to take effect.
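To verify which endpoint is currently registered for object storage, the keystone catalog can be inspected; this is a quick check, not part of the documented procedure:
openstack catalog show object-store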
12.24 Roles and Services in SUSE OpenStack Cloud Crowbar #
The following table lists all roles (as defined in the barclamps), and their
associated services. As of SUSE OpenStack Cloud Crowbar 8, this list is work in
progress. Services can be manually started and stopped with the commands
systemctl start SERVICE
and
systemctl stop SERVICE
.
| Role | Service |
|---|---|
| ceilometer-agent | openstack-ceilometer-agent-compute |
| ceilometer-central, ceilometer-server, ceilometer-swift-proxy-middleware | |
| cinder-controller | |
| cinder-volume | |
| database-server | |
| glance-server | |
| heat-server | |
| horizon | |
| keystone-server | |
| manila-server | |
| manila-share | |
| neutron-server | |
| nova-compute-* | |
| nova-controller | |
| rabbitmq-server | |
| swift-dispersion | none |
| swift-proxy | |
| swift-ring-compute | none |
| swift-storage | |
12.25 Crowbar Batch Command #
This is the documentation for the crowbar batch
subcommand.
crowbar batch
provides a quick way of creating, updating,
and applying Crowbar proposals. It can be used to:
Accurately capture the configuration of an existing Crowbar environment.
Drive Crowbar to build a complete new environment from scratch.
Capture one SUSE OpenStack Cloud environment and then reproduce it on another set of hardware (provided hardware and network configuration match to an appropriate extent).
Automatically update existing proposals.
As the name suggests, crowbar batch is intended to be run in “batch mode”, that is, mostly unattended. It has two modes of operation:
- crowbar batch export
Exports a YAML file which describes existing proposals and how their parameters deviate from the default proposal values for that barclamp.
- crowbar batch build
Imports a YAML file in the same format as above. Uses it to build new proposals if they do not yet exist. Updates the existing proposals so that their parameters match those given in the YAML file.
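A typical round trip might look like the following sketch; the file name is an example, and the exact argument handling may differ between versions (run crowbar batch --help to confirm):
# Capture the current proposals, then rebuild or update them from the file
crowbar batch export > cloud.yaml
crowbar batch build cloud.yaml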
12.25.1 YAML file format #
Here is an example YAML file. At the top-level there is a proposals array, each entry of which is a hash representing a proposal:
proposals:
- barclamp: provisioner
  # Proposal name defaults to 'default'.
  attributes:
    shell_prompt: USER@ALIAS:CWD SUFFIX
- barclamp: database
  # Default attributes are good enough, so we just need to assign
  # nodes to roles:
  deployment:
    elements:
      database-server:
        - "@@controller1@@"
- barclamp: rabbitmq
  deployment:
    elements:
      rabbitmq-server:
        - "@@controller1@@"
Note that the characters @
and `
are
reserved indicators in YAML. They can appear anywhere in a string
except at the beginning. Therefore a string such as
@@controller1@@
needs to be quoted using double quotes.
12.25.2 Top-level proposal attributes #
- barclamp
Name of the barclamp for this proposal (required).
- name
Name of this proposal (optional; default is default). In build mode, if the proposal does not already exist, it will be created.
- attributes
An optional nested hash containing any attributes for this proposal which deviate from the defaults for the barclamp.
In export mode, any attributes set to the default values are excluded to keep the YAML as short and readable as possible.
In build mode, these attributes are deep-merged with the current values for the proposal. If the proposal did not already exist, batch build will create it first, and the attributes are merged with the default values for the barclamp's proposal.
- wipe_attributes
An optional array of paths to nested attributes which should be removed from the proposal.
Each path is a period-delimited sequence of attributes; for example pacemaker.stonith.sbd.nodes would remove all SBD nodes from the proposal if it already exists. If a path segment contains a period, it should be escaped with a backslash, for example segment-one.segment\.two.segment_three.
This removal occurs before the deep merge described above. For example, think of a YAML file which includes a Pacemaker barclamp proposal where the wipe_attributes entry contains pacemaker.stonith.sbd.nodes. A batch build with this YAML file ensures that only SBD nodes listed in the attributes sibling hash are used at the end of the run. In contrast, without the wipe_attributes entry, the given SBD nodes would be appended to any SBD nodes already defined in the proposal.
- deployment
A nested hash defining how and where this proposal should be deployed.
In build mode, this hash is deep-merged in the same way as the attributes hash, except that the array of elements for each Chef role is reset to the empty list before the deep merge. This behavior may change in the future.
12.25.3 Node Alias Substitutions #
A string like @@node@@ (where node is a node alias) will be replaced with the name of that node, no matter where the string appears in the YAML file. For example, if controller1 is a Crowbar alias for the node d52-54-02-77-77-02.mycloud.com, then @@controller1@@ will be replaced with that host name. This allows YAML files to be reused across environments.
12.25.4 Options #
In addition to the standard options available to every crowbar subcommand (run crowbar batch --help for a full list), there are some extra options specifically for crowbar batch:
- --include <barclamp[.proposal]>
Only include the barclamp / proposals given.
This option can be repeated multiple times. The inclusion value can either be the name of a barclamp (for example, pacemaker) or a specifically named proposal within the barclamp (for example, pacemaker.network_cluster).
If this option is specified, then only the barclamps / proposals specified are included in the build or export operation, and all others are ignored.
- --exclude <barclamp[.proposal]>
This option can be repeated multiple times. The exclusion value is the same format as for --include. The barclamps / proposals specified are excluded from the build or export operation.
- --timeout <seconds>
Change the timeout for Crowbar API calls.
As Chef's run lists grow, some of the later OpenStack barclamp proposals (for example nova, horizon, or heat) can take over 5 or even 10 minutes to apply. Therefore you may need to increase this timeout to 900 seconds in some circumstances.
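For example, the options above can be combined along these lines; the values and file name are examples, and argument handling may differ between versions:
crowbar batch build --include pacemaker.network_cluster --timeout 900 cloud.yaml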