Applies to SUSE OpenStack Cloud Crowbar 9

12 Deploying the OpenStack Services

After the nodes are installed and configured, you can start deploying the OpenStack components to finalize the installation. The components need to be deployed in a given order, because they depend on one another. The Pacemaker component for an HA setup is the only exception to this rule—it can be set up at any time. However, when deploying SUSE OpenStack Cloud Crowbar from scratch, we recommend deploying the Pacemaker proposal(s) first. Deployment for all components is done from the Crowbar Web interface through recipes, so-called barclamps. (See Section 12.24, “Roles and Services in SUSE OpenStack Cloud Crowbar” for a table of all roles and services, and how to start and stop them.)

The components controlling the cloud, including storage management and control components, need to be installed on the Control Node(s) (refer to Section 1.2, “The Control Node(s)” for more information). However, you must not use your Control Node(s) as a Compute Node or as a swift storage host. Do not install the components swift-storage and nova-compute-* on the Control Node(s); these components must be installed on dedicated Storage Nodes and Compute Nodes.

When deploying an HA setup, the Control Nodes are replaced by one or more controller clusters consisting of at least two nodes (three are recommended). We recommend setting up three separate clusters for data, services, and networking. See Section 2.6, “High Availability” for more information on requirements and recommendations for an HA setup.

The OpenStack components need to be deployed in the following order. For general instructions on how to edit and deploy barclamps, refer to Section 10.3, “Deploying Barclamp Proposals”. Any optional components that you elect to use must be installed in their correct order.

12.1 Deploying designate

designate provides DNS as a Service (DNSaaS) for SUSE OpenStack Cloud Crowbar. It is used to create and propagate zones and records over the network using pools of DNS servers. Deployment defaults are in place, so not much is required to configure designate. neutron needs additional settings for integration with designate, which are located in the [designate] section of the neutron configuration.

The designate barclamp relies heavily on the DNS barclamp and expects it to be applied without any failures.

Note

To deploy designate, the DNS barclamp must include at least one node other than the Admin Node. Because the Admin Node is not attached to the public network, another node is needed that can be attached to the public network and appear in the designate default pool.

In highly available deployments where designate services run in a cluster, we recommend running the DNS services in a cluster as well. For example, in a typical HA deployment where the controllers are deployed in a 3-node cluster, the DNS barclamp should be applied to all controllers, in the same manner as designate.

designate-server role

Installs the designate server packages and configures the mini-dns (mdns) service required by designate.

designate-worker role

Configures a designate worker on the selected nodes. designate uses the workers to distribute its workload.

designate Sink is an optional service and is not configured as part of this barclamp.

designate uses one or more pools over which it distributes zones and records. Pools can have varied configurations, and any misconfiguration can lead to information leakage.

The designate barclamp creates a default Bind9 pool out of the box, which can be modified later as needed. The default Bind9 pool configuration is created by Crowbar in /etc/designate/pools.crowbar.yaml on a node with the designate-server role. You can copy this file and edit it according to your requirements. Then provide this configuration to designate using the command:

tux > designate-manage pool update --file /etc/designate/pools.crowbar.yaml

The dns_domain specified in the [designate] section of the neutron configuration is the default zone where DNS records for neutron resources are created via the neutron-designate integration. If this is desired, you must create this zone explicitly using the following command:

ardana > openstack zone create --email EMAIL DNS_DOMAIN
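
For example, assuming a dns_domain of example.com. and a hypothetical contact address, the call could look like this:

ardana > openstack zone create --email admin@example.com example.com.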

Editing the designate proposal:

Edit designate Proposal

12.1.1 Using PowerDNS Backend

Designate uses the Bind9 backend by default. It is also possible to use the PowerDNS backend in addition to, or as an alternative to, the Bind9 backend. To do so, PowerDNS must be deployed manually, because the designate barclamp currently does not provide any facility to automatically install and configure PowerDNS. This section outlines the steps to deploy the PowerDNS backend.

Note

If PowerDNS is already deployed, you may skip Section 12.1.1.1, “Install PowerDNS” and jump to Section 12.1.1.2, “Configure Designate To Use PowerDNS Backend”.

12.1.1.1 Install PowerDNS

Follow these steps to install and configure PowerDNS on a Crowbar node. Keep in mind that PowerDNS must be deployed with MySQL backend.

Note

In highly available deployments where Designate services run in a cluster, we recommend running PowerDNS in a cluster as well. For example, in a typical HA deployment where the controllers are deployed in a 3-node cluster, PowerDNS should be running on all controllers, in the same manner as Designate.

  1. Install PowerDNS packages.

    root # zypper install pdns pdns-backend-mysql
  2. Edit /etc/pdns/pdns.conf and provide the following options (see https://doc.powerdns.com/authoritative/settings.html for a complete reference):

    api

    Set to yes to enable the web service REST API.

    api-key

    Static REST API access key. Use a secure random string here.

    launch

    Must be set to gmysql to use the MySQL backend.

    gmysql-host

    Host name (FQDN) or IP address of the MySQL server.

    gmysql-user

    MySQL user with full access to the PowerDNS database.

    gmysql-password

    Password for the MySQL user.

    gmysql-dbname

    MySQL database name for PowerDNS.

    local-port

    Port number on which PowerDNS listens for incoming requests.

    setgid

    The group the PowerDNS process runs under.

    setuid

    The user the PowerDNS process runs under.

    webserver

    Must be set to yes to enable the web service REST API.

    webserver-address

    Hostname (FQDN) or IP address of the PowerDNS web service.

    webserver-allow-from

    List of IP addresses (IPv4 or IPv6) of the nodes that are permitted to talk to the PowerDNS web service. These must include the IP address of the Designate worker nodes.

    For example:

    api=yes
    api-key=Sfw234sDFw90z
    launch=gmysql
    gmysql-host=mysql.acme.com
    gmysql-user=powerdns
    gmysql-password=SuperSecured123
    gmysql-dbname=powerdns
    local-port=54
    setgid=pdns
    setuid=pdns
    webserver=yes
    webserver-address=192.168.124.83
    webserver-allow-from=0.0.0.0/0,::/0
  3. Log in to MySQL from a Crowbar MySQL node and create the PowerDNS database and a user with full access to it. The database name, user name, and password must match the gmysql-dbname, gmysql-user, and gmysql-password values specified above.

    For example:

    root # mysql
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MariaDB connection id is 20075
    Server version: 10.2.29-MariaDB-log SUSE package
    
    Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    MariaDB [(none)]> CREATE DATABASE powerdns;
    Query OK, 1 row affected (0.01 sec)
    
    MariaDB [(none)]> GRANT ALL ON powerdns.* TO 'powerdns'@'localhost' IDENTIFIED BY 'SuperSecured123';
    Query OK, 0 rows affected (0.00 sec)
    
    MariaDB [(none)]> GRANT ALL ON powerdns.* TO 'powerdns'@'192.168.124.83' IDENTIFIED BY 'SuperSecured123';
    Query OK, 0 rows affected, 1 warning (0.02 sec)
    
    MariaDB [(none)]> FLUSH PRIVILEGES;
    Query OK, 0 rows affected (0.01 sec)
    
    MariaDB [(none)]> exit
    Bye
  4. Create a MySQL schema file, named powerdns-schema.sql, with the following content:

    /*
     SQL statements to create tables in designate_pdns DB.
     Note: This file is taken as is from:
     https://raw.githubusercontent.com/openstack/designate/master/devstack/designate_plugins/backend-pdns4-mysql-db.sql
    */
    CREATE TABLE domains (
      id                    INT AUTO_INCREMENT,
      name                  VARCHAR(255) NOT NULL,
      master                VARCHAR(128) DEFAULT NULL,
      last_check            INT DEFAULT NULL,
      type                  VARCHAR(6) NOT NULL,
      notified_serial       INT DEFAULT NULL,
      account               VARCHAR(40) DEFAULT NULL,
      PRIMARY KEY (id)
    ) Engine=InnoDB;
    
    CREATE UNIQUE INDEX name_index ON domains(name);
    
    
    CREATE TABLE records (
      id                    INT AUTO_INCREMENT,
      domain_id             INT DEFAULT NULL,
      name                  VARCHAR(255) DEFAULT NULL,
      type                  VARCHAR(10) DEFAULT NULL,
      -- Changed to "TEXT", as VARCHAR(65000) is too big for most MySQL installs
      content               TEXT DEFAULT NULL,
      ttl                   INT DEFAULT NULL,
      prio                  INT DEFAULT NULL,
      change_date           INT DEFAULT NULL,
      disabled              TINYINT(1) DEFAULT 0,
      ordername             VARCHAR(255) BINARY DEFAULT NULL,
      auth                  TINYINT(1) DEFAULT 1,
      PRIMARY KEY (id)
    ) Engine=InnoDB;
    
    CREATE INDEX nametype_index ON records(name,type);
    CREATE INDEX domain_id ON records(domain_id);
    CREATE INDEX recordorder ON records (domain_id, ordername);
    
    
    CREATE TABLE supermasters (
      ip                    VARCHAR(64) NOT NULL,
      nameserver            VARCHAR(255) NOT NULL,
      account               VARCHAR(40) NOT NULL,
      PRIMARY KEY (ip, nameserver)
    ) Engine=InnoDB;
    
    
    CREATE TABLE comments (
      id                    INT AUTO_INCREMENT,
      domain_id             INT NOT NULL,
      name                  VARCHAR(255) NOT NULL,
      type                  VARCHAR(10) NOT NULL,
      modified_at           INT NOT NULL,
      account               VARCHAR(40) NOT NULL,
      -- Changed to "TEXT", as VARCHAR(65000) is too big for most MySQL installs
      comment               TEXT NOT NULL,
      PRIMARY KEY (id)
    ) Engine=InnoDB;
    
    CREATE INDEX comments_domain_id_idx ON comments (domain_id);
    CREATE INDEX comments_name_type_idx ON comments (name, type);
    CREATE INDEX comments_order_idx ON comments (domain_id, modified_at);
    
    
    CREATE TABLE domainmetadata (
      id                    INT AUTO_INCREMENT,
      domain_id             INT NOT NULL,
      kind                  VARCHAR(32),
      content               TEXT,
      PRIMARY KEY (id)
    ) Engine=InnoDB;
    
    CREATE INDEX domainmetadata_idx ON domainmetadata (domain_id, kind);
    
    
    CREATE TABLE cryptokeys (
      id                    INT AUTO_INCREMENT,
      domain_id             INT NOT NULL,
      flags                 INT NOT NULL,
      active                BOOL,
      content               TEXT,
      PRIMARY KEY(id)
    ) Engine=InnoDB;
    
    CREATE INDEX domainidindex ON cryptokeys(domain_id);
    
    
    CREATE TABLE tsigkeys (
      id                    INT AUTO_INCREMENT,
      name                  VARCHAR(255),
      algorithm             VARCHAR(50),
      secret                VARCHAR(255),
      PRIMARY KEY (id)
    ) Engine=InnoDB;
    
    CREATE UNIQUE INDEX namealgoindex ON tsigkeys(name, algorithm);
  5. Create the PowerDNS schema for the database using the mysql CLI. For example:

    root # mysql powerdns < powerdns-schema.sql
  6. Enable and start the pdns systemd service.

    root # systemctl enable pdns
    root # systemctl start pdns

    If pdns is running successfully, you should see log entries like the following when running the journalctl -u pdns command:

    Feb 07 01:44:12 d52-54-77-77-01-01 systemd[1]: Started PowerDNS Authoritative Server.
    Feb 07 01:44:12 d52-54-77-77-01-01 pdns_server[21285]: Done launching threads, ready to distribute questions
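
    To quickly confirm that the PowerDNS REST API is reachable, you can query it with curl (a sketch, assuming the example api-key and webserver-address from above and the default API port 8081):

    root # curl -H 'X-API-Key: Sfw234sDFw90z' http://192.168.124.83:8081/api/v1/servers/localhost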

12.1.1.2 Configure Designate To Use PowerDNS Backend

Configure Designate to use the PowerDNS backend by appending the PowerDNS servers to the /etc/designate/pools.crowbar.yaml file on a Designate worker node.

Note

If you are replacing the Bind9 backend with the PowerDNS backend, make sure to remove the bind9 entries from /etc/designate/pools.crowbar.yaml.

In an HA deployment, there should be multiple PowerDNS entries.

Also, make sure the api_token matches the api-key that was specified in the /etc/pdns/pdns.conf file earlier.

Append the PowerDNS entries to the end of /etc/designate/pools.crowbar.yaml. For example:

---
- name: default-bind
  description: Default BIND9 Pool
  id: 794ccc2c-d751-44fe-b57f-8894c9f5c842
  attributes: {}
  ns_records:
  - hostname: public-d52-54-77-77-01-01.virtual.cloud.suse.de.
    priority: 1
  - hostname: public-d52-54-77-77-01-02.virtual.cloud.suse.de.
    priority: 1
  nameservers:
  - host: 192.168.124.83
    port: 53
  - host: 192.168.124.81
    port: 53
  also_notifies: []
  targets:
  - type: bind9
    description: BIND9 Server
    masters:
    - host: 192.168.124.83
      port: 5354
    - host: 192.168.124.82
      port: 5354
    - host: 192.168.124.81
      port: 5354
    options:
      host: 192.168.124.83
      port: 53
      rndc_host: 192.168.124.83
      rndc_port: 953
      rndc_key_file: "/etc/designate/rndc.key"
  - type: bind9
    description: BIND9 Server
    masters:
    - host: 192.168.124.83
      port: 5354
    - host: 192.168.124.82
      port: 5354
    - host: 192.168.124.81
      port: 5354
    options:
      host: 192.168.124.81
      port: 53
      rndc_host: 192.168.124.81
      rndc_port: 953
      rndc_key_file: "/etc/designate/rndc.key"
  - type: pdns4
    description: PowerDNS4 DNS Server
    masters:
      - host: 192.168.124.83
        port: 5354
      - host: 192.168.124.82
        port: 5354
      - host: 192.168.124.81
        port: 5354
    options:
      host: 192.168.124.83
      port: 54
      api_endpoint: http://192.168.124.83:8081
      api_token: Sfw234sDFw90z

Update the pools using designate-manage CLI.

tux > designate-manage pool update --file /etc/designate/pools.crowbar.yaml

Once Designate has synchronized with PowerDNS, you should see domains in the PowerDNS database that reflect the zones in Designate.

Note

It may take a few minutes for Designate to synchronize with PowerDNS.

You can verify that the domains have been successfully synchronized from Designate by inspecting the domains table in the database. For example:

root # mysql powerdns
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 21131
Server version: 10.2.29-MariaDB-log SUSE package

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [powerdns]> select * from domains;
+----+---------+--------------------------------------------------------------+------------+-------+-----------------+---------+
| id | name    | master                                                       | last_check | type  | notified_serial | account |
+----+---------+--------------------------------------------------------------+------------+-------+-----------------+---------+
|  1 | foo.bar | 192.168.124.81:5354 192.168.124.82:5354 192.168.124.83:5354  |       NULL | SLAVE |            NULL |         |
+----+---------+--------------------------------------------------------------+------------+-------+-----------------+---------+
1 row in set (0.00 sec)

12.2 Deploying Pacemaker (Optional, HA Setup Only)

To make the SUSE OpenStack Cloud controller functions and the Compute Nodes highly available, set up one or more clusters by deploying Pacemaker (see Section 2.6, “High Availability” for details). Since it is possible (and recommended) to deploy more than one cluster, a separate proposal needs to be created for each cluster.

Deploying Pacemaker is optional. In case you do not want to deploy it, skip this section and start the node deployment by deploying the database as described in Section 12.3, “Deploying the Database”.

Note: Number of Cluster Nodes

To set up a cluster, at least two nodes are required. See Section 2.6.5, “Cluster Requirements and Recommendations” for more information.

To create a proposal, go to Barclamps › OpenStack and click Edit for the Pacemaker barclamp. A drop-down box where you can enter a name and a description for the proposal opens. Click Create to open the configuration screen for the proposal.

Create Pacemaker Proposal
Important: Proposal Name

The name you enter for the proposal will be used to generate host names for the virtual IP addresses of HAProxy. By default, the names follow this scheme:

cluster-PROPOSAL_NAME.FQDN (for the internal name)
public-cluster-PROPOSAL_NAME.FQDN (for the public name)

For example, when PROPOSAL_NAME is set to data, this results in the following names:

cluster-data.example.com
public-cluster-data.example.com

For requirements regarding SSL encryption and certificates, see Section 2.3, “SSL Encryption”.

The following options are configurable in the Pacemaker configuration screen:

Transport for Communication

Choose the technology used for cluster communication. You can choose between Multicast (UDP), which sends a message to multiple destinations, and Unicast (UDPU), which sends a message to a single destination. By default, unicast is used.

Policy when cluster does not have quorum

Whenever communication fails between one or more nodes and the rest of the cluster, a cluster partition occurs. The nodes of a cluster are split into partitions but are still active. They can only communicate with nodes in the same partition and are unaware of the separated nodes. The cluster partition that has the majority of nodes is defined to have quorum.

This configuration option defines what to do with the cluster partition(s) that do not have quorum. See https://documentation.suse.com/sle-ha/15-SP1/single-html/SLE-HA-guide/#sec-conf-hawk2-cluster-config for details.

The recommended setting is to choose Stop. However, Ignore is enforced for two-node clusters to ensure that the remaining node continues to operate normally in case the other node fails. For clusters using shared resources, Freeze may be chosen to ensure that these resources continue to be available.

STONITH: Configuration mode for STONITH

Misbehaving nodes in a cluster are shut down to prevent them from causing trouble. This mechanism is called STONITH (“Shoot the other node in the head”). STONITH can be configured in a variety of ways; refer to https://documentation.suse.com/sle-ha/15-SP1/single-html/SLE-HA-guide/#cha-ha-fencing for details. The following configuration options exist:

Configured manually

STONITH will not be configured when deploying the barclamp. It needs to be configured manually as described in https://documentation.suse.com/sle-ha/15-SP1/single-html/SLE-HA-guide/#cha-ha-fencing. For experts only.

Configured with IPMI data from the IPMI barclamp

Using this option automatically sets up STONITH with data received from the IPMI barclamp. Being able to use this option requires IPMI to be configured for all cluster nodes, which should be the case by default. To check or change the IPMI deployment, go to Barclamps › Crowbar › IPMI › Edit. Also make sure the Enable BMC option is set to true in this barclamp.

Important: STONITH Devices Must Support IPMI

To configure STONITH with the IPMI data, all STONITH devices must support IPMI. Problems with this setup may occur with IPMI implementations that are not strictly standards compliant. In this case it is recommended to set up STONITH with STONITH block devices (SBD).

Configured with STONITH Block Devices (SBD)

This option requires manually setting up shared storage and a watchdog on the cluster nodes before applying the proposal. To do so, proceed as follows:

  1. Prepare the shared storage. The path to the shared storage device must be persistent and consistent across all nodes in the cluster. The SBD device must not use host-based RAID or cLVM2.

  2. Install the package sbd on all cluster nodes.

  3. Initialize the SBD device by running the following command. Make sure to replace /dev/SBD with the path to the shared storage device.

    sbd -d /dev/SBD create

    Refer to https://documentation.suse.com/sle-ha/15-SP1/single-html/SLE-HA-guide/#sec-ha-storage-protect-test for details.
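
    To confirm that the device was initialized, you can dump its header (a sketch, assuming /dev/SBD is your shared storage device):

    sbd -d /dev/SBD dump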

In Kernel module for watchdog, specify the respective kernel module to be used. Find the most commonly used watchdog drivers in the following table:

Hardware                            Driver
HP                                  hpwdt
Dell, Fujitsu, Lenovo (Intel TCO)   iTCO_wdt
Generic                             softdog

If your hardware is not listed above, either ask your hardware vendor for the right name or check the following directory for a list of choices: /lib/modules/KERNEL_VERSION/kernel/drivers/watchdog.

Alternatively, list the drivers that have been installed with your kernel version:

root # rpm -ql kernel-VERSION | grep watchdog

If the nodes need different watchdog modules, leave the text box empty.

After the shared storage has been set up, specify the path using the by-id notation (/dev/disk/by-id/DEVICE). It is possible to specify multiple paths as a comma-separated list.

Deploying the barclamp will automatically complete the SBD setup on the cluster nodes by starting the SBD daemon and configuring the fencing resource.

Configured with one shared resource for the whole cluster

All nodes will use the identical configuration. Specify the Fencing Agent to use and enter Parameters for the agent.

To get a list of STONITH devices which are supported by the High Availability Extension, run the following command on an already installed cluster node: stonith -L. The list of parameters depends on the respective agent. To view a list of parameters, use the following command:

stonith -t agent -n
Configured with one resource per node

All nodes in the cluster use the same Fencing Agent, but can be configured with different parameters. This setup is, for example, required when nodes are in different chassis and therefore need different IPMI parameters.

To get a list of STONITH devices which are supported by the High Availability Extension, run the following command on an already installed cluster node: stonith -L. The list of parameters depends on the respective agent. To view a list of parameters, use the following command (see the example after this list):

stonith -t agent -n
Configured for nodes running in libvirt

Use this setting for completely virtualized test installations. This option is not supported.
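
For example, to list the parameters of the IPMI fencing agent (a sketch, assuming the external/ipmi agent is installed on the node):

stonith -t external/ipmi -n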

STONITH: Do not start corosync on boot after fencing

With STONITH, Pacemaker clusters with two nodes may sometimes hit an issue known as a STONITH deathmatch, where each node kills the other one, resulting in both nodes rebooting all the time. A similar issue in Pacemaker clusters is the fencing loop, where a reboot caused by STONITH is not enough to fix a node, so it is fenced again and again.

This setting can be used to limit these issues. When set to true, a node that has not been properly shut down or rebooted will not start the services for Pacemaker on boot. Instead, the node will wait for action from the SUSE OpenStack Cloud operator. When set to false, the services for Pacemaker will always be started on boot. The Automatic value is used to have the most appropriate value automatically picked: it will be true for two-node clusters (to avoid STONITH deathmatches), and false otherwise.

When a node boots but does not start corosync because of this setting, the node's status in the Node Dashboard is set to "Problem" (red dot).

Mail Notifications: Enable Mail Notifications

Get notified of cluster node failures via e-mail. If set to true, you need to specify which SMTP Server to use, a prefix for the mail subject, and the sender and recipient addresses. Note that the SMTP server must be accessible by the cluster nodes.

HAProxy: Public name for public virtual IP

The public name is the host name that will be used for the public virtual IP address of HAProxy instead of the generated public name (see Important: Proposal Name), for example when registering public endpoints. Any name specified here needs to be resolvable by a name server placed outside of the SUSE OpenStack Cloud network.

The Pacemaker Barclamp
Figure 12.1: The Pacemaker Barclamp

The Pacemaker component consists of the following roles. Deploying the hawk-server role is optional:

pacemaker-cluster-member

Deploy this role on all nodes that should become members of the cluster.

hawk-server

Deploying this role is optional. If deployed, sets up the Hawk Web interface which lets you monitor the status of the cluster. The Web interface can be accessed via https://IP-ADDRESS:7630. The default hawk credentials are username hacluster, password crowbar.

The password is visible and editable in the Custom view of the Pacemaker barclamp, and also in the "corosync": section of the Raw view.

Note that the GUI on SUSE OpenStack Cloud can only be used to monitor the cluster status and not to change its configuration.

hawk-server should be deployed on at least one cluster node. We recommend deploying it on all cluster nodes.

pacemaker-remote

Deploy this role on all nodes that should become members of the Compute Nodes cluster. They will run as Pacemaker remote nodes that are controlled by the cluster, but do not affect quorum. Instead of the complete cluster stack, only the pacemaker-remote component will be installed on these nodes.

The Pacemaker Barclamp: Node Deployment Example
Figure 12.2: The Pacemaker Barclamp: Node Deployment Example

After a cluster has been successfully deployed, it is listed under Available Clusters in the Deployment section and can be used for role deployment like a regular node.

Warning: Deploying Roles on Single Cluster Nodes

When using clusters, roles from other barclamps must never be deployed to single nodes that are already part of a cluster. The only exceptions to this rule are the following roles:

  • cinder-volume

  • swift-proxy + swift-dispersion

  • swift-ring-compute

  • swift-storage

Important: Service Management on the Cluster

After a role has been deployed on a cluster, its services are managed by the HA software. You must never manually start or stop an HA-managed service, nor configure it to start on boot. Services may only be started or stopped by using the cluster management tools Hawk or the crm shell. See https://documentation.suse.com/sle-ha/15-SP1/single-html/SLE-HA-guide/#sec-ha-config-basics-resources for more information.

Note: Testing the Cluster Setup

To check whether all cluster resources are running, either use the Hawk Web interface or run the command crm_mon -1r. If that is not the case, clean up the respective resource with crm resource cleanup RESOURCE, so it gets respawned.
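
For example, to list all resources and then clean up a failed one (replace RESOURCE with the resource name reported by crm_mon):

root # crm_mon -1r
root # crm resource cleanup RESOURCE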

Also make sure that STONITH works correctly before continuing with the SUSE OpenStack Cloud setup. This is especially important if you have chosen a STONITH configuration that requires manual setup. To test whether STONITH works, log in to a node on the cluster and run the following command:

pkill -9 corosync

In case STONITH is correctly configured, the node will reboot.

Before testing on a production cluster, plan a maintenance window in case issues should arise.

12.3 Deploying the Database

The first service that needs to be deployed is the database. The database component uses MariaDB and is used by all other components. It must be installed on a Control Node. The database can be made highly available by deploying it on a cluster.

The only attribute you may change is the maximum number of database connections (Global Connection Limit). The default value should usually work; only change it for large deployments if the log files show database connection failures.

The Database Barclamp
Figure 12.3: The Database Barclamp

12.3.1 Deploying MariaDB

Deploying the database requires the use of MariaDB.

Note: MariaDB and HA

The MariaDB back end features full HA support based on the Galera clustering technology. The HA setup requires an odd number of nodes; the recommended number of nodes is three.

12.3.1.1 SSL Configuration

SSL can be enabled with either a stand-alone or a cluster deployment. The replication traffic between database nodes is not encrypted, while traffic between the database server(s) and clients is, so a separate network for the database servers is recommended.

Certificates can be provided, or the barclamp can generate self-signed certificates. The certificate file names are configurable in the barclamp; to use the defaults, the directories /etc/mysql/ssl/certs and /etc/mysql/ssl/private need to be created before the barclamp is applied. The CA certificate and the certificate for MariaDB to use both go into /etc/mysql/ssl/certs. The corresponding private key for the certificate is placed into the /etc/mysql/ssl/private directory. As long as the files are readable when the barclamp is deployed, permissions can be tightened after a successful deployment once the appropriate UNIX groups exist.

The Common Name (CN) of the SSL certificate must be the fully qualified server name for single-host deployments, and cluster-CLUSTER_NAME.FULL_DOMAIN_NAME for cluster deployments.
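
To check which CN a prepared certificate carries before applying the barclamp, you can inspect it with openssl (a sketch, assuming a hypothetical certificate file name):

root # openssl x509 -noout -subject -in /etc/mysql/ssl/certs/server-cert.pem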

Note: Certificate validation errors

If certificate validation errors are causing issues with deploying other barclamps (for example, when creating databases or users) you can check the configuration with mysql --ssl-verify-server-cert which will perform the same verification that Crowbar does when connecting to the database server.

If certificates are supplied, the CA certificate and its full trust chain must be in the ca.pem file. The certificate must be trusted by the machine (or all cluster members in a cluster deployment), and it must be available on all client machines. That is, if the OpenStack services are deployed on separate machines or cluster members, they all require the CA certificate to be in /etc/mysql/ssl/certs as well as trusted by the machine.
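
A minimal sketch of preparing the default directories, assuming hypothetical file names server-cert.pem and server-key.pem for the certificate and its key:

root # mkdir -p /etc/mysql/ssl/certs /etc/mysql/ssl/private
root # cp ca.pem server-cert.pem /etc/mysql/ssl/certs/
root # cp server-key.pem /etc/mysql/ssl/private/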

12.3.1.2 MariaDB Configuration Options

MariaDB Configuration
Figure 12.4: MariaDB Configuration

The following configuration settings are available via the Database barclamp graphical interface:

Datadir

Path to a directory for storing database data.

Maximum Number of Simultaneous Connections

The maximum number of simultaneous client connections.

Number of days after which the binary logs can be automatically removed

A period after which the binary logs are removed.

Slow Query Logging

When enabled, all queries that take longer than usual to execute are logged to a separate log file (by default, it's /var/log/mysql/mysql_slow.log). This can be useful for debugging.

Warning: MariaDB Deployment Restriction

When MariaDB is used as the database back end, the monasca-server role cannot be deployed to the node with the database-server role. These two roles cannot coexist because monasca uses its own MariaDB instance.

12.4 Deploying RabbitMQ

The RabbitMQ messaging system enables services to communicate with other nodes via the Advanced Message Queuing Protocol (AMQP). Deploying it is mandatory. RabbitMQ needs to be installed on a Control Node. RabbitMQ can be made highly available by deploying it on a cluster. We recommend not changing the default values of the proposal's attributes.

Virtual Host

Name of the default virtual host to be created and used by the RabbitMQ server (default_vhost configuration option in rabbitmq.config).

Port

Port the RabbitMQ server listens on (tcp_listeners configuration option in rabbitmq.config).

User

RabbitMQ default user (default_user configuration option in rabbitmq.config).

The RabbitMQ Barclamp
Figure 12.5: The RabbitMQ Barclamp

12.4.1 HA Setup for RabbitMQ

To make RabbitMQ highly available, deploy it on a cluster instead of a single Control Node. This also requires shared storage for the cluster that hosts the RabbitMQ data. We recommend using a dedicated cluster to deploy RabbitMQ together with the database, since both components require shared storage.

Deploying RabbitMQ on a cluster makes an additional High Availability section available in the Attributes section of the proposal. Configure the Storage Mode in this section.

12.4.2 SSL Configuration for RabbitMQ

The RabbitMQ barclamp supports securing traffic via SSL. This is similar to the SSL support in other barclamps, but with these differences:

  • RabbitMQ can listen on two ports at the same time, typically port 5672 for unsecured and port 5671 for secured traffic.

  • The ceilometer pipeline for OpenStack swift cannot be passed SSL-related parameters. When SSL is enabled for RabbitMQ, the ceilometer pipeline in swift is turned off rather than sending data over an unsecured channel.

The following steps are the fastest way to set up and test a new SSL certificate authority (CA).

  1. In the RabbitMQ barclamp set Enable SSL to true, and Generate (self-signed) certificates (implies insecure) to true, then apply the barclamp. The barclamp will create a new CA, enter the correct settings in /etc/rabbitmq/rabbitmq.config, and start RabbitMQ.

  2. Test your new CA with OpenSSL, substituting the hostname of your control node:

    openssl s_client -connect d52-54-00-59-e5-fd:5671
    [...]
    Verify return code: 18 (self signed certificate)

    This outputs a lot of information, including a copy of the server's public certificate, protocols, ciphers, and the chain of trust.

  3. The last step is to configure client services to use SSL to access the RabbitMQ service. (See https://www.rabbitmq.com/ssl.html for a complete reference).

It is preferable to set up your own CA. The best practice is to use a commercial certificate authority. You may also deploy your own self-signed certificates, provided that your cloud is not publicly accessible and is only for internal use. Follow these steps to enable your own CA in RabbitMQ and deploy it to SUSE OpenStack Cloud:

  • Configure the RabbitMQ barclamp to use the control node's certificate authority (CA), if it already has one, or create a CA specifically for RabbitMQ and configure the barclamp to use that. (See Section 2.3, “SSL Encryption”, and the RabbitMQ manual has a detailed howto on creating your CA at http://www.rabbitmq.com/ssl.html, with customizations for .NET and Java clients.)

    Example RabbitMQ SSL barclamp configuration
    Figure 12.6: SSL Settings for RabbitMQ Barclamp

The configuration options in the RabbitMQ barclamp allow tailoring the barclamp to your SSL setup.

Enable SSL

Set this to True to expose all of your configuration options.

SSL Port

RabbitMQ's SSL listening port. The default is 5671.

Generate (self-signed) certificates (implies insecure)

When this is set to true, self-signed certificates are automatically generated and copied to the correct locations on the control node, and all other barclamp options are set automatically. This is the fastest way to apply and test the barclamp. Do not use this on production systems. When this is set to false the remaining options are exposed.

SSL Certificate File

The location of your public root CA certificate.

SSL (Private) Key File

The location of your private server key.

Require Client Certificate

This goes with SSL CA Certificates File. Set to true to require clients to present SSL certificates to RabbitMQ.

SSL CA Certificates File

Trust client certificates presented by the clients that are signed by other CAs. You'll need to store copies of the CA certificates; see "Trust the Client's Root CA" at http://www.rabbitmq.com/ssl.html.

SSL Certificate is insecure (for instance, self-signed)

When this is set to false, clients validate the RabbitMQ server certificate with the SSL client CA file.

SSL client CA file (used to validate rabbitmq server certificate)

Tells clients of RabbitMQ where to find the CA bundle that validates the certificate presented by the RabbitMQ server, when SSL Certificate is insecure (for instance, self-signed) is set to false.

12.4.3 Configuring Clients to Send Notifications

RabbitMQ has an option called Configure clients to send notifications. It defaults to false, which means no events will be sent. It must be set to true for ceilometer, monasca, and any other services consuming notifications. When it is set to true, OpenStack services are configured to submit lifecycle audit events to the notification RabbitMQ queue.

This option should only be enabled if an active consumer is configured, otherwise events will accumulate on the RabbitMQ server, clogging up CPU, memory, and disk storage.

Any accumulation can be cleared by running:

$ rabbitmqctl -p /openstack purge_queue notifications.info
$ rabbitmqctl -p /openstack purge_queue notifications.error

12.5 Deploying keystone

keystone is another core component that is used by all other OpenStack components. It provides authentication and authorization services. keystone needs to be installed on a Control Node. keystone can be made highly available by deploying it on a cluster. You can configure the following parameters of this barclamp:

Algorithm for Token Generation

Set the algorithm used by keystone to generate the tokens. You can choose between Fernet (the default) or UUID. Note that for performance and security reasons it is strongly recommended to use Fernet.

Region Name

Allows customizing the region name that crowbar is going to manage.

Default Credentials: Default Tenant

Tenant for the users. Do not change the default value of openstack.

Default Credentials: Administrator User Name/Password

User name and password for the administrator.

Default Credentials: Create Regular User

Specify whether a regular user should be created automatically. Not recommended in most scenarios, especially in an LDAP environment.

Default Credentials: Regular User Username/Password

User name and password for the regular user. Both the regular user and the administrator accounts can be used to log in to the SUSE OpenStack Cloud Dashboard. However, only the administrator can manage keystone users and access.

The keystone Barclamp
Figure 12.7: The keystone Barclamp
SSL Support: Protocol

When you use the default value HTTP, public communication will not be encrypted. Choose HTTPS to use SSL for encryption. See Section 2.3, “SSL Encryption” for background information and Section 11.4.6, “Enabling SSL” for installation instructions. The following additional configuration options will become available when choosing HTTPS:

Generate (self-signed) certificates

When set to true, self-signed certificates are automatically generated and copied to the correct locations. This setting is for testing purposes only and should never be used in production environments!

SSL Certificate File / SSL (Private) Key File

Location of the certificate key pair files.

SSL Certificate is insecure

Set this option to true when using self-signed certificates to disable certificate checks. This setting is for testing purposes only and should never be used in production environments!

SSL CA Certificates File

Specify the absolute path to the CA certificate. This field is mandatory, and leaving it blank will cause the barclamp to fail. To fix this issue, you have to provide the absolute path to the CA certificate, restart the apache2 service, and re-deploy the barclamp.

When the certificate is not already trusted by the pre-installed list of trusted root certificate authorities, you need to provide a certificate bundle that includes the root and all intermediate CAs.

The SSL Dialog
Figure 12.8: The SSL Dialog

12.5.1 Authenticating with LDAP

keystone has the ability to separate identity back-ends by domains. SUSE OpenStack Cloud 9 uses this method for authenticating users.

The keystone barclamp sets up a MariaDB database by default. Configuring an LDAP back-end is done in the Raw view.

  1. Set "domain_specific_drivers": true,

  2. Then, in the "domain_specific_config": section, configure a map with domain names as keys and configurations as values. In the default proposal, the domain name key is "ldap_users", and its value contains the two sections required for an LDAP-based identity driver configuration: the [identity] section, which sets the driver, and the [ldap] section, which sets the LDAP connection options. You may configure multiple domains, each with its own configuration.

You may make this available to horizon by setting multi_domain_support to true in the horizon barclamp.

Users in the LDAP-backed domain have to know the name of the domain in order to authenticate, and must use the keystone v3 API endpoint. (See the OpenStack manuals, Domain-specific Configuration and Integrate Identity with LDAP, for additional details.)
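
As an illustration, a minimal sketch of the corresponding Raw attributes, assuming a hypothetical LDAP server at ldap.example.com and placeholder DNs (the exact [ldap] options depend on your directory layout):

"domain_specific_drivers": true,
"domain_specific_config": {
  "ldap_users": {
    "identity": {
      "driver": "ldap"
    },
    "ldap": {
      "url": "ldap://ldap.example.com",
      "user": "cn=keystone,ou=system,dc=example,dc=com",
      "password": "PASSWORD",
      "suffix": "dc=example,dc=com",
      "user_tree_dn": "ou=people,dc=example,dc=com",
      "user_objectclass": "inetOrgPerson"
    }
  }
}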

12.5.2 HA Setup for keystone

Making keystone highly available requires no special configuration—it is sufficient to deploy it on a cluster.

12.5.3 OpenID Connect Setup for keystone

keystone supports WebSSO by federating with an external identity provider (IdP) using the auth_openidc module.

There are two steps to enable this feature:

  1. Configure the "federation" and "openidc" attributes for the Keystone Barclamp in Crowbar.

  2. Create the Identity Provider, Protocol, and Mapping resource in keystone using OpenStack Command Line Tool (CLI).

12.5.3.1 keystone Barclamp Configuration

Configuring OpenID Connect is done in the Raw view, under the federation section. The global attributes, namely trusted_dashboards and websso_keystone_url, are not specific to OpenID Connect. Rather, they are designed to help facilitate WebSSO browser redirects with external IdPs in a complex cloud deployment environment.

If the cloud deployment does not have any external proxies or load balancers, and the public keystone and horizon (Dashboard service) endpoints are directly managed by Crowbar, trusted_dashboards and websso_keystone_url do not need to be provided. However, in a complex cloud deployment where the public Keystone and Horizon endpoints are handled by external load balancers or proxies and are not directly managed by Crowbar, trusted_dashboards and websso_keystone_url must be provided and must correctly reflect the external public endpoints.

To configure OpenID Connect, edit the attributes in the openidc subsection (see the example after this list).

  1. Set "enabled":true

  2. Provide the name for the identity_provider. This must be the same as the identity provider to be created in Keystone using the OpenStack Command Line Tool (CLI). For example, if the identity provider is foo, create the identity provider with that name via the OpenStack CLI (openstack identity provider create foo).

  3. response_type corresponds to auth_openidc OIDCResponseType. In most cases, it should be id_token.

  4. scope corresponds to auth_openidc OIDCScope.

  5. metadata_url corresponds to auth_openidc OIDCProviderMetadataURL.

  6. client_id corresponds to auth_openidc OIDCClientID.

  7. client_secret corresponds to auth_openidc OIDCClientSecret.

  8. redirect_uri corresponds to auth_openidc OIDCRedirectURI. In a cloud deployment where all the external endpoints are directly managed by Crowbar, this attribute can be left blank as it will be auto-populated by Crowbar. However, in a complex cloud deployment where the public Keystone endpoint is handled by an external load balancer or proxy, this attribute must reflect the external Keystone auth endpoint for the OpenID Connect IdP. For example, "https://keystone-public-endpoint.foo.com/v3/OS-FEDERATION/identity_providers/foo/protocols/openid/auth"

    Warning

    Some OpenID Connect IdPs such as Google require the hostname in the redirect_uri to be a public FQDN. In that case, the hostname in Keystone public endpoint must also be a public FQDN and must match the one specified in the redirect_uri.
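
For reference, an illustrative openidc snippet in the Raw view, assuming a hypothetical IdP named foo and placeholder values for the remaining attributes:

"openidc": {
  "enabled": true,
  "identity_provider": "foo",
  "response_type": "id_token",
  "scope": "openid email profile",
  "metadata_url": "https://idp.example.com/.well-known/openid-configuration",
  "client_id": "CLIENT_ID",
  "client_secret": "CLIENT_SECRET",
  "redirect_uri": ""
}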

12.5.3.2 Create Identity Provider, Protocol, and Mapping

To fully enable OpenID Connect, the Identity Provider, Protocol, and Mapping for the given IdP must be created in Keystone. This is done by using the OpenStack Command Line Tool, on a controller node, and using the Keystone admin credential.

  1. Log in to a controller node as the root user.

  2. Use the Keystone admin credential.

    source ~/.openrc
  3. Create the Identity Provider. For example:

    openstack identity provider create foo
    Warning

    The name of the Identity Provider must be exactly the same as the identity_provider attribute given when configuring Keystone in the previous section.

  4. Next, create the Mapping for the Identity Provider. Prior to creating the Mapping, make sure you fully understand the intricacies of Mapping Combinations, as mistakes can have profound security implications. Here is an example of a mapping file:

    [
        {
            "local": [
                {
                    "user": {
                        "name": "{0}",
                        "email": "{1}",
                        "type": "ephemeral"
                     },
                     "group": {
                        "domain": {
                            "name": "Default"
                        },
                        "name": "openidc_demo"
                    }
                 }
             ],
             "remote": [
                 {
                     "type": "REMOTE_USER"
                 },
                 {
                     "type": "HTTP_OIDC_EMAIL"
                 }
    
            ]
        }
    ]

    Once the mapping file is created, create the mapping resource in Keystone. For example:

    openstack mapping create --rule oidc_mapping.json oidc_mapping
  5. Lastly, create the Protocol for the Identity Provider and its mapping. For OpenID Connect, the protocol name must be openid. For example:

    openstack federation protocol create --identity-provider foo --mapping oidc_mapping openid

12.6 Deploying monasca (Optional)

monasca is an open-source monitoring-as-a-service solution that integrates with OpenStack. monasca is designed for scalability, high performance, and fault tolerance.

Accessing the Raw interface is not required for day-to-day operation. However, as not all monasca settings are exposed in the barclamp graphical interface (for example, various performance tunables), we recommend configuring monasca in Raw mode. Below are the options that can be configured via the Raw interface of the monasca barclamp.

The monasca barclamp Raw Mode
Figure 12.9: The monasca barclamp Raw Mode

agent: settings for openstack-monasca-agent

keystone

Contains keystone credentials that the agents use to send metrics. Do not change these options, as they are configured by Crowbar.

insecure

Specifies whether SSL certificates are verified when communicating with keystone. If set to false, the ca_file option must be specified.

ca_file

Specifies the location of a CA certificate that is used for verifying keystone's SSL certificate.

log_dir

Path for storing log files. The specified path must exist. Do not change the default /var/log/monasca-agent path.

log_level

Agent's log level. Limits log messages to the specified level and above. The following levels are available: Error, Warning, Info (default), and Debug.

check_frequency

Interval in seconds between running agents' checks.

num_collector_threads

Number of simultaneous collector threads to run. This refers to the maximum number of different collector plug-ins (for example, http_check) that are allowed to run simultaneously. The default value 1 means that plug-ins are run sequentially.

pool_full_max_retries

If a problem with the results from multiple plug-ins blocks the entire thread pool (as specified by the num_collector_threads parameter), the collector exits so that it can be restarted by supervisord. The pool_full_max_retries parameter specifies when this happens: the collector exits when the defined number of consecutive collection cycles have ended with the thread pool completely full.

plugin_collect_time_warn

Upper limit in seconds for any collection plug-in's run time. A warning is logged if a plug-in runs longer than the specified limit.

max_measurement_buffer_size

Maximum number of measurements to buffer locally if the monasca API is unreachable. Measurements will be dropped in batches, if the API is still unreachable after the specified number of messages are buffered. The default -1 value indicates unlimited buffering. Note that a large buffer increases the agent's memory usage.

backlog_send_rate

Maximum number of measurements to send when the local measurement buffer is flushed.

amplifier

Number of extra dimensions to add to metrics sent to the monasca API. This option is intended for load testing purposes only. Do not enable the option in production! The default 0 value disables the addition of dimensions.

log_agent: settings for openstack-monasca-log-agent

max_data_size_kb

Maximum payload size in kilobytes for a request sent to the monasca log API.

num_of_logs

Maximum number of log entries the log agent sends to the monasca log API in a single request. Reducing the number increases performance.

elapsed_time_sec

Time interval in seconds between sending logs to the monasca log API.

delay

Interval in seconds for checking whether elapsed_time_sec has been reached.

keystone

keystone credentials the log agents use to send logs to the monasca log API. Do not change this option manually, as it is configured by Crowbar.

api: Settings for openstack-monasca-api

bind_host

Interfaces monasca-api listens on. Do not change this option, as it is configured by Crowbar.

processes

Number of processes to spawn.

threads

Number of WSGI worker threads to spawn.

log_level

Log level for openstack-monasca-api. Limits log messages to the specified level and above. The following levels are available: Critical, Error, Warning, Info (default), Debug, and Trace.

elasticsearch: server-side settings for elasticsearch

repo_dir

List of directories for storing Elasticsearch snapshots. Must be created manually and be writeable by the elasticsearch user. Must contain at least one entry in order for the snapshot functionality to work.

heap_size

Sets the heap size. We recommend setting heap size at 50% of the available memory, but not more than 31 GB. The default of 4 GB is likely too small and should be increased if possible.

limit_memlock

The maximum size that may be locked into memory, in bytes.

limit_nofile

The maximum number of open file descriptors.

limit_nproc

The maximum number of processes.

vm_max_map_count

The maximum number of memory map areas a process may have.

For instructions on creating an Elasticsearch snapshot, see Section 4.7.4, “Backup and Recovery”.

elasticsearch_curator: settings for elastisearch-curator

elasticsearch-curator removes old and large elasticsearch indices. The settings below determine its behavior.

delete_after_days

Time threshold for deleting indices. Indices older than the specified number of days are deleted. This parameter is unset by default, so indices are kept indefinitely.

delete_after_size

Maximum size in megabytes of indices. Indices larger than the specified size are deleted. This parameter is unset by default, so indices are kept irrespective of their size.

delete_exclude_index

List of indices to exclude from elasticsearch-curator runs. By default, only the .kibana files are excluded.

kafka: tunables for Kafka

log_retention_hours

Number of hours for retaining log segments in Kafka's on-disk log. Messages older than the specified value are dropped.

log_retention_bytes

Maximum size for Kafka's on-disk log in bytes. If the log grows beyond this size, the oldest log segments are dropped.

topics

List of topics:

  • metrics

  • events

  • alarm-state-transitions

  • alarm-notifications

  • retry-notifications

  • 60-seconds-notifications

  • log

  • transformed-log

The following are options of every topic:

replicas

Controls how many servers replicate each message that is written.

partitions

Controls how many logs the topic is sharded into.

config_options

Map of configuration options, as described in the Apache Kafka documentation.

These parameters only affect first-time installations. Parameters may be changed after installation with scripts available from Apache Kafka.

Kafka does not support reducing the number of partitions for a topic.
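
For example, the number of partitions for an existing topic could be increased after installation with the Kafka command line tools (a sketch, assuming the kafka-topics.sh script is available on the monasca node and ZooKeeper listens on localhost:2181):

root # kafka-topics.sh --zookeeper localhost:2181 --alter --topic metrics --partitions 6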

notification:

email_enabled

Enable or disable email alarm notifications.

email_smtp_host

SMTP smarthost for sending alarm notifications.

email_smtp_port

Port for the SMTP smarthost.

email_smtp_user

User name for authenticating against the smarthost.

email_smtp_password

Password for authenticating against the smarthost.

email_smtp_from_address

Sender address for alarm notifications.

master: configuration for monasca-installer on the Crowbar node

influxdb_retention_policy

Number of days to keep metrics records in influxdb.

For an overview of all supported values, see https://docs.influxdata.com/influxdb/v1.1/query_language/database_management/#create-retention-policies-with-create-retention-policy.

monasca: settings for libvirt and Ceph monitoring

monitor_libvirt

The global switch for toggling libvirt monitoring. If set to true, libvirt metrics will be gathered on all libvirt-based Compute Nodes. This setting is available in the Crowbar UI.

monitor_ceph

The global switch for toggling Ceph monitoring. If set to true, Ceph metrics will be gathered on all Ceph-based Compute Nodes. This setting is available in the Crowbar UI. If the Ceph cluster has been set up independently, Crowbar ignores this setting.

cache_dir

The directory where monasca-agent will locally cache various metadata about locally running VMs on each Compute Node.

customer_metadata

Specifies the list of instance metadata keys to be included as dimensions with customer metrics. This is useful for providing more information about an instance.

disk_collection_period

Specifies a minimum interval in seconds for collecting disk metrics. Increase this value to reduce I/O load. If the value is lower than the global agent collection period (check_frequency), it will be ignored in favor of the global collection period.

max_ping_concurrency

Specifies the number of ping command processes to run concurrently when determining whether the VM is reachable. This should be set to a value that allows the plugin to finish within the agent's collection period, even if there is a networking issue. For example, if the expected number of VMs per Compute Node is 40 and each VM has one IP address, then the plugin will take at least 40 seconds to do the ping checks in the worst-case scenario where all pings fail (assuming the default timeout of 1 second). Increasing max_ping_concurrency allows the plugin to finish faster.

metadata

Specifies the list of nova side instance metadata keys to be included as dimensions with the cross-tenant metrics for the monasca project. This is useful for providing more information about an instance.

nova_refresh

Specifies the number of seconds between calls to the nova API to refresh the instance cache. This is helpful for updating VM hostname and pruning deleted instances from the cache. By default, it is set to 14,400 seconds (four hours). Set to 0 to refresh every time the Collector runs, or to None to disable regular refreshes entirely. In this case, the instance cache will only be refreshed when a new instance is detected.

ping_check

Includes the entire ping command (without the IP address, which is automatically appended) to perform a ping check against instances. The NAMESPACE keyword is automatically replaced with the appropriate network namespace for the VM being monitored. Set to False to disable ping checks.

vm_cpu_check_enable

Toggles the collection of VM CPU metrics. Set to true to enable.

vm_disks_check_enable

Toggles the collection of VM disk metrics. Set to true to enable.

vm_extended_disks_check_enable

Toggles the collection of extended disk metrics. Set to true to enable.

vm_network_check_enable

Toggles the collection of VM network metrics. Set to true to enable.

vm_ping_check_enable

Toggles ping checks for checking whether a host is alive. Set to true to enable.

vm_probation

Specifies a period of time (in seconds) in which to suspend metrics from a newly-created VM. This is to prevent quickly-obsolete metrics in an environment with a high amount of instance churn (VMs created and destroyed in rapid succession). The default probation length is 300 seconds (5 minutes). Set to 0 to disable VM probation. In this case, metrics are recorded immediately after a VM is created.

vnic_collection_period

Specifies a minimum interval in seconds for collecting VM network metrics. Increase this value to reduce I/O load. If the value is lower than the global agent collection period (check_frequency), it will be ignored in favor of the global collection period.

Deployment

The monasca component consists of the following roles:

monasca-server

monasca server-side components that are deployed by Chef. Currently, this only creates keystone resources required by monasca, such as users, roles, endpoints, etc. The rest is left to the Ansible-based monasca-installer run by the monasca-master role.

monasca-master

Runs the Ansible-based monasca-installer from the Crowbar node. The installer deploys the monasca server-side components to the node that has the monasca-server role assigned to it. These components are openstack-monasca-api and openstack-monasca-log-api, as well as all the back-end services they use.

monasca-agent

Deploys openstack-monasca-agent that is responsible for sending metrics to monasca-api on nodes it is assigned to.

monasca-log-agent

Deploys openstack-monasca-log-agent responsible for sending logs to monasca-log-api on nodes it is assigned to.

The monasca Barclamp: Node Deployment Example
Figure 12.10: The monasca Barclamp: Node Deployment Example

12.7 Deploying swift (optional)

swift adds an object storage service to SUSE OpenStack Cloud for storing single files such as images or snapshots. It offers high data security by storing the data redundantly on a pool of Storage Nodes—therefore swift needs to be installed on at least two dedicated nodes.

To properly configure swift it is important to understand how it places the data. Data is always stored redundantly within the hierarchy. The swift hierarchy in SUSE OpenStack Cloud is formed out of zones, nodes, hard disks, and logical partitions. Zones are physically separated clusters, for example different server rooms each with its own power supply and network segment. A failure of one zone must not affect another zone. The next level in the hierarchy are the individual swift storage nodes (on which swift-storage has been deployed), followed by the hard disks. Logical partitions come last.

swift automatically places three copies of each object on the highest hierarchy level possible. If three zones are available, then each copy of the object will be placed in a different zone. In a one zone setup with more than two nodes, the object copies will each be stored on a different node. In a one zone setup with two nodes, the copies will be distributed on different hard disks. If no other hierarchy element fits, logical partitions are used.

The following attributes can be set to configure swift:

Allow Public Containers

Set to true to enable public access to containers.

Enable Object Versioning

If set to true, a copy of the current version is archived each time an object is updated.

Zones

Number of zones (see above). If you do not have different independent installations of storage nodes, set the number of zones to 1.

Create 2^X Logical Partitions

Partition power. The number entered here is used as a power of two (2^X) to compute the number of logical partitions to be created in the cluster.

We recommend using a minimum of 100 partitions per disk. To determine the partition power for your setup, multiply the number of disks from all swift nodes by 100, and then round up to the next power of two. Keep in mind that the first disk of each node is not used by swift, but rather for the operating system.

Example: 10 swift nodes with 5 hard disks each. Four hard disks on each node are used for swift, so there is a total of 40 disks. 40 x 100 = 4000. The next power of two is 4096, which equals 2^12. So the partition power to enter is 12.
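
The rounding step can also be scripted. The following is a minimal bash sketch of the calculation described above, using the example disk count as an assumed input:

DISKS=40                      # disks usable by swift across all nodes (example value)
TARGET=$((DISKS * 100))       # aim for at least 100 partitions per disk
POWER=0
while [ $((2 ** POWER)) -lt "$TARGET" ]; do
  POWER=$((POWER + 1))
done
echo "Partition power: $POWER (2^$POWER = $((2 ** POWER)) partitions)"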

Important
Important: Value Cannot be Changed After the Proposal Has Been Deployed

Changing the number of logical partitions after swift has been deployed is not supported. Therefore the value for the partition power should be calculated from the maximum number of partitions this cloud installation is likely to need at any point in time.

Minimum Hours before Partition is reassigned

This option sets the number of hours before a logical partition is considered for relocation. 24 is the recommended value.

Replicas

The number of copies generated for each object. The number of replicas depends on the number of disks and zones.

Replication interval (in seconds)

Time (in seconds) after which to start a new replication process.

Debug

Shows debugging output in the log files when set to true.

SSL Support: Protocol

Choose whether to encrypt public communication (HTTPS) or not (HTTP). If you choose HTTPS, you have two options. You can either Generate (self-signed) certificates or provide the locations for the certificate key pair files. Using self-signed certificates is for testing purposes only and should never be used in production environments!

The swift Barclamp
Figure 12.11: The swift Barclamp

Apart from the general configuration described above, the swift barclamp also lets you activate and configure Additional Middlewares. The features these middlewares provide can be used via the swift command line client only. The Ratelimit and S3 middleware provide the most generally useful features; we recommend enabling other middleware only for specific use cases.

S3 Middleware

Provides an S3 compatible API on top of swift.

StaticWeb

Serve container data as a static Web site with an index file and optional file listings. See http://docs.openstack.org/developer/swift/middleware.html#staticweb for details.

This middleware requires setting Allow Public Containers to true.

TempURL

Create URLs to provide time-limited access to objects. See http://docs.openstack.org/developer/swift/middleware.html#tempurl for details.

FormPOST

Upload files to a container via Web form. See http://docs.openstack.org/developer/swift/middleware.html#formpost for details.

Bulk

Extract TAR archives into a swift account, and delete multiple objects or containers with a single request. See http://docs.openstack.org/developer/swift/middleware.html#module-swift.common.middleware.bulk for details.

Cross-domain

Interact with the swift API via Flash, Java, and Silverlight from an external network. See http://docs.openstack.org/developer/swift/middleware.html#module-swift.common.middleware.crossdomain for details.

Domain Remap

Translates container and account parts of a domain to path parameters that the swift proxy server understands. Can be used to create short URLs that are easy to remember, for example by rewriting home.tux.example.com/$ROOT/tux/home/myfile to home.tux.example.com/myfile. See http://docs.openstack.org/developer/swift/middleware.html#module-swift.common.middleware.domain_remap for details.

Ratelimit

Throttle resources such as requests per minute to provide denial of service protection. See http://docs.openstack.org/developer/swift/middleware.html#module-swift.common.middleware.ratelimit for details.

The swift component consists of four different roles. Deploying swift-dispersion is optional:

swift-storage

The virtual object storage service. Install this role on all dedicated swift Storage Nodes (at least two), but not on any other node.

Warning
Warning: swift-storage Needs Dedicated Machines

Never install the swift-storage service on a node that runs other OpenStack components.

swift-ring-compute

The ring maintains the information about the location of objects, replicas, and devices. It can be compared to an index that is used by various OpenStack components to look up the physical location of objects. swift-ring-compute must only be installed on a single node, preferably a Control Node.

swift-proxy

The swift proxy server takes care of routing requests to swift. Installing a single instance of swift-proxy on a Control Node is recommended. The swift-proxy role can be made highly available by deploying it on a cluster.

swift-dispersion

Deploying swift-dispersion is optional. The swift dispersion tools can be used to test the health of the cluster. They create a set of dummy objects (using 1% of the total space available). The state of these objects can be queried using the swift-dispersion-report command. swift-dispersion needs to be installed on a Control Node.

The swift Barclamp: Node Deployment Example
Figure 12.12: The swift Barclamp: Node Deployment Example

12.7.1 HA Setup for swift

swift replicates by design, so there is no need for a special HA setup. Make sure to fulfill the requirements listed in Section 2.6.4.1, “swift—Avoiding Points of Failure”.

12.8 Deploying glance

glance provides discovery, registration, and delivery services for virtual disk images. An image is needed to start an instance; it provides the pre-installed root partition from which the instance boots. All images you want to use in your cloud to boot instances from are provided by glance. glance must be deployed onto a Control Node. glance can be made highly available by deploying it on a cluster.

There are a lot of options to configure glance. The most important ones are explained below—for a complete reference refer to https://github.com/crowbar/crowbar-openstack/blob/master/glance.yml.

Important
Important: glance API Versions

As of SUSE OpenStack Cloud Crowbar 7, the glance API v1 is no longer enabled by default. Instead, glance API v2 is used by default.

If you need to re-enable API v1 for compatibility reasons:

  1. Switch to the Raw view of the glance barclamp.

  2. Search for the enable_v1 entry and set it to true:

    "enable_v1": true

    In new installations, this entry is set to false by default. When upgrading from an older version of SUSE OpenStack Cloud Crowbar it is set to true by default.

  3. Apply your changes.

Image Storage: Default Storage Store

File Images are stored in an image file on the Control Node.

cinder Provides volume block storage to SUSE OpenStack Cloud Crowbar. Use it to store images.

swift Provides an object storage service to SUSE OpenStack Cloud Crowbar.

Rados SUSE Enterprise Storage (based on Ceph) provides block storage service to SUSE OpenStack Cloud Crowbar.

VMware If you are using VMware as a hypervisor, it is recommended to use VMware for storing images. This will make starting VMware instances much faster.

Expose Backend Store Location If this is set to true, the API will communicate the direct URL of the image's back-end location to HTTP clients. Set to false by default.

Depending on the storage back-end, there are additional configuration options available:

File Store Parameters

Only required if Default Storage Store is set to File.

Image Store Directory

Specify the directory to host the image file. The directory specified here can also be an NFS share. See Section 11.4.3, “Mounting NFS Shares on a Node” for more information.

swift Store Parameters

Only required if Default Storage Store is set to swift.

swift Container

Set the name of the container to use for the images in swift.

RADOS Store Parameters

Only required if Default Storage Store is set to Rados.

RADOS User for CephX Authentication

If you are using an external Ceph cluster, specify the user you have set up for glance (see Section 11.4.4, “Using an Externally Managed Ceph Cluster” for more information).

RADOS Pool for glance images

If you are using a SUSE OpenStack Cloud internal Ceph setup, the pool you specify here is created if it does not exist. If you are using an external Ceph cluster, specify the pool you have set up for glance (see Section 11.4.4, “Using an Externally Managed Ceph Cluster” for more information).

VMware Store Parameters

Only required if Default Storage Store is set to VMware.

vCenter Host/IP Address

Name or IP address of the vCenter server.

vCenter Username / vCenter Password

vCenter login credentials.

Datastores for Storing Images

A comma-separated list of datastores specified in the format: DATACENTER_NAME:DATASTORE_NAME

Path on the datastore, where the glance images will be stored

Specify an absolute path here.

SSL Support: Protocol

Choose whether to encrypt public communication (HTTPS) or not (HTTP). If you choose HTTPS, refer to SSL Support: Protocol for configuration details.

Caching

Enable and configure image caching in this section. By default, image caching is disabled. You can see this in the Raw view of your nova barclamp:

image_cache_manager_interval = -1

This option sets the number of seconds to wait between runs of the image cache manager. Disabling it means that the cache manager will not automatically remove the unused images from the cache, so if you have many glance images and are running out of storage you must manually remove the unused images from the cache. We recommend leaving this option disabled as it is known to cause issues, especially with shared storage. The cache manager may remove images still in use, e.g. when network outages cause synchronization problems with compute nodes.

If you wish to enable caching, re-enable it in a custom nova configuration file, for example /etc/nova/nova.conf.d/500-nova.conf. The following sets the interval to 40 minutes (2400 seconds):

image_cache_manager_interval = 2400
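
A complete drop-in file could look like the following minimal sketch, assuming the option belongs to the [DEFAULT] section in this release:

# /etc/nova/nova.conf.d/500-nova.conf
[DEFAULT]
# run the image cache manager every 40 minutes (2400 seconds)
image_cache_manager_interval = 2400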

See Chapter 14, Configuration Files for OpenStack Services for more information on custom configurations.

Learn more about glance's caching feature at http://docs.openstack.org/developer/glance/cache.html.

Logging: Verbose Logging

Shows debugging output in the log files when set to true.

The glance Barclamp
Figure 12.13: The glance Barclamp

12.8.1 HA Setup for glance

glance can be made highly available by deploying it on a cluster. We strongly recommend doing this for the image data as well. The recommended way is to use swift or an external Ceph cluster for the image repository. If you are using a directory on the node instead (file storage back-end), you should set up shared storage on the cluster for it.

12.9 Deploying cinder

cinder, the successor of nova-volume, provides volume block storage. It adds persistent storage to an instance that persists until deleted, contrary to ephemeral volumes that only persist while the instance is running.

cinder can provide volume storage by using different back-ends such as local file, one or more local disks, Ceph (RADOS), VMware, or network storage solutions from EMC, EqualLogic, Fujitsu, NetApp or Pure Storage. Since SUSE OpenStack Cloud Crowbar 5, cinder supports using several back-ends simultaneously. It is also possible to deploy the same network storage back-end multiple times and therefore use different installations at the same time.

The attributes that can be set to configure cinder depend on the back-end. The only general option is SSL Support: Protocol (see SSL Support: Protocol for configuration details).

Tip
Tip: Adding or Changing a Back-End

When first opening the cinder barclamp, the default proposal—Raw Devices—is already available for configuration. To optionally add a back-end, go to the section Add New cinder Back-End and choose a Type Of Volume from the drop-down box. Optionally, specify the Name for the Backend. This is recommended when deploying the same volume type more than once. Existing back-end configurations (including the default one) can be deleted by clicking the trashcan icon if no longer needed. Note that you must configure at least one back-end.

Raw devices (local disks)

Disk Selection Method

Choose whether to use the First Available disk or All Available disks. Available disks are all disks currently not used by the system. Note that one disk (usually /dev/sda) of every block storage node is already used for the operating system and is not available for cinder.

Name of Volume

Specify a name for the cinder volume.

EMC (EMC² Storage)

IP address of the ECOM server / Port of the ECOM server

IP address and Port of the ECOM server.

Username for accessing the ECOM server / Password for accessing the ECOM server

Login credentials for the ECOM server.

VMAX port groups to expose volumes managed by this backend

VMAX port groups that expose volumes managed by this back-end.

Serial number of the VMAX Array

Unique VMAX array serial number.

Pool name within a given array

Unique pool name within a given array.

FAST Policy name to be used

Name of the FAST Policy to be used. When specified, volumes managed by this back-end are placed under FAST control.

For more information on the EMC driver refer to the OpenStack documentation at http://docs.openstack.org/liberty/config-reference/content/emc-vmax-driver.html.

EqualLogic

EqualLogic drivers are included as a technology preview and are not supported.

Fujitsu ETERNUS DX

Connection Protocol

Select the protocol used to connect, either FibreChannel or iSCSI.

IP for SMI-S / Port for SMI-S

IP address and port of the ETERNUS SMI-S Server.

Username for SMI-S / Password for SMI-S

Login credentials for the ETERNUS SMI-S Server.

Snapshot (Thick/RAID Group) Pool Name

Storage pool (RAID group) in which the volumes are created. Make sure that the RAID group on the server has already been created. If a RAID group that does not exist is specified, the RAID group is built from unused disk drives. The RAID level is automatically determined by the ETERNUS DX Disk storage system.

Hitachi HUSVM

For information on configuring the Hitachi HUSVM back-end, refer to http://docs.openstack.org/ocata/config-reference/block-storage/drivers/hitachi-storage-volume-driver.html.

NetApp

Storage Family Type / Storage Protocol

SUSE OpenStack Cloud can use Data ONTAP in 7-Mode or in Clustered Mode. In 7-Mode, a vFiler will be configured; in Clustered Mode, a vServer will be configured. The Storage Protocol can be set to either iSCSI or NFS. Choose the driver and the protocol your NetApp is licensed for.

Server host name

The management IP address for the 7-Mode storage controller, or the cluster management IP address for the clustered Data ONTAP.

Transport Type

Transport protocol for communicating with the storage controller or clustered Data ONTAP. Supported protocols are HTTP and HTTPS. Choose the protocol your NetApp is licensed for.

Server port

The port to use for communication. Port 80 is usually used for HTTP, 443 for HTTPS.

Username for accessing NetApp / Password for Accessing NetApp

Login credentials.

The vFiler Unit Name for provisioning OpenStack volumes (netapp_vfiler)

The vFiler unit to be used for provisioning of OpenStack volumes. This setting is only available in 7-Mode.

Restrict provisioning on iSCSI to these volumes (netapp_volume_list)

Provide a list of comma-separated volume names to be used for provisioning. This setting is only available when using iSCSI as storage protocol.

NFS

List of NFS Exports

A list of available file systems on an NFS server. Enter your NFS mountpoints in the List of NFS Exports form in this format: host:mountpoint -o options. For example:

host1:/srv/nfs/share1 /mnt/nfs/share1 -o rsize=8192,wsize=8192,timeo=14,intr

Pure Storage (FlashArray)

IP address of the management VIP

IP address of the FlashArray management VIP

API token for the FlashArray

API token for access to the FlashArray

iSCSI CHAP authentication enabled

Enable or disable iSCSI CHAP authentication

For more information on the Pure Storage FlashArray driver refer to the OpenStack documentation at https://docs.openstack.org/ocata/config-reference/block-storage/drivers/pure-storage-driver.html.

RADOS (Ceph)

Use Ceph Deployed by Crowbar

Select false, if you are using an external Ceph cluster (see Section 11.4.4, “Using an Externally Managed Ceph Cluster” for setup instructions).

RADOS pool for cinder volumes

Name of the pool used to store the cinder volumes.

RADOS user (Set Only if Using CephX authentication)

Ceph user name.

VMware Parameters

vCenter Host/IP Address

Host name or IP address of the vCenter server.

vCenter Username / vCenter Password

vCenter login credentials.

vCenter Cluster Names for Volumes

Provide a comma-separated list of cluster names.

Folder for Volumes

Path to the directory used to store the cinder volumes.

CA file for verifying the vCenter certificate

Absolute path to the vCenter CA certificate.

vCenter SSL Certificate is insecure (for instance, self-signed)

Default value: false (the CA truststore is used for verification). Set this option to true when using self-signed certificates to disable certificate checks. This setting is for testing purposes only and must not be used in production environments!

Local file

Volume File Name

Absolute path to the file to be used for block storage.

Maximum File Size (GB)

Maximum size of the volume file. Make sure not to overcommit the size, since it will result in data loss.

Name of Volume

Specify a name for the cinder volume.

Note
Note: Using Local File for Block Storage

Using a file for block storage is not recommended for production systems, because of performance and data security reasons.

Other driver

Lets you manually pick and configure a driver. Only use this option for testing purposes, as it is not supported.

The cinder Barclamp
Figure 12.14: The cinder Barclamp

The cinder component consists of two different roles:

cinder-controller

The cinder controller provides the scheduler and the API. Installing cinder-controller on a Control Node is recommended.

cinder-volume

The virtual block storage service. It can be installed on a Control Node. However, we recommend deploying it on one or more dedicated nodes supplied with sufficient networking capacity to handle the increase in network traffic.

The cinder Barclamp: Node Deployment Example
Figure 12.15: The cinder Barclamp: Node Deployment Example

12.9.1 HA Setup for cinder

Both the cinder-controller and the cinder-volume role can be deployed on a cluster.

Note
Note: Moving cinder-volume to a Cluster

If you need to re-deploy the cinder-volume role from a single machine to a cluster environment, the following will happen: volumes that are currently attached to instances will continue to work, but adding volumes to instances will not succeed.

To solve this issue, run the following script once on each node that belongs to the cinder-volume cluster: /usr/bin/cinder-migrate-volume-names-to-cluster.

The script is automatically installed by Crowbar on every machine or cluster that has a cinder-volume role applied to it.

In combination with Ceph or a network storage solution, deploying cinder in a cluster minimizes the potential downtime. For cinder-volume to be deployable on a cluster, all cinder back-ends must be configured for non-local storage. If you are using local volumes or raw devices in any of your volume back-ends, you cannot apply cinder-volume to a cluster.

12.10 Deploying neutron

neutron provides network connectivity between interface devices managed by other OpenStack components (most likely nova). The service works by enabling users to create their own networks and then attach interfaces to them.

neutron must be deployed on a Control Node. You first need to choose a core plug-in—ml2 or vmware. Depending on your choice, more configuration options will become available.

The vmware option lets you use an existing VMware NSX installation. Using this plugin is not a prerequisite for the VMware vSphere hypervisor support. However, it is needed if you want security group support on VMware Compute Nodes. For all other scenarios, choose ml2.

The only global option that can be configured is SSL Support. Choose whether to encrypt public communication (HTTPS) or not (HTTP). If choosing HTTPS, refer to SSL Support: Protocol for configuration details.

ml2 (Modular Layer 2)

Modular Layer 2 Mechanism Drivers

Select which mechanism driver(s) shall be enabled for the ml2 plugin. It is possible to select more than one driver by holding the Ctrl key while clicking. Choices are:

openvswitch Supports GRE, VLAN and VXLAN networks (to be configured via the Modular Layer 2 type drivers setting). VXLAN is the default.

linuxbridge Supports VLANs only. Requires specifying the Maximum Number of VLANs.

cisco_nexus Enables neutron to dynamically adjust the VLAN settings of the ports of an existing Cisco Nexus switch when instances are launched. It also requires openvswitch, which will automatically be selected. vlan must be added to the Modular Layer 2 type drivers. This option also requires specifying the Cisco Switch Credentials. See Appendix A, Using Cisco Nexus Switches with neutron for details.

vmware_dvs The vmware_dvs driver makes it possible to use neutron for networking in a VMware-based environment. Choosing vmware_dvs automatically selects the required openvswitch, vxlan, and vlan drivers. In the Raw view, it is also possible to configure two additional attributes: clean_on_start (clean up the DVS portgroups on the target vCenter Servers when neutron-server is restarted) and precreate_networks (create DVS portgroups corresponding to networks in advance, rather than when virtual machines are attached to these networks).

Use Distributed Virtual Router Setup

With the default setup, all intra-Compute Node traffic flows through the network Control Node. The same is true for all traffic from floating IPs. In large deployments the network Control Node can therefore quickly become a bottleneck. When this option is set to true, network agents will be installed on all compute nodes. This will de-centralize the network traffic, since Compute Nodes will be able to directly talk to each other. Distributed Virtual Routers (DVR) require the openvswitch driver and will not work with the linuxbridge driver. For details on DVR refer to https://wiki.openstack.org/wiki/Neutron/DVR.

Modular Layer 2 Type Drivers

This option is only available when having chosen the openvswitch or the cisco_nexus mechanism drivers. Options are vlan, gre and vxlan. It is possible to select more than one driver by holding the Ctrl key while clicking.

When multiple type drivers are enabled, you need to select the Default Type Driver for Provider Network, which will be used for newly created provider networks. This also includes the nova_fixed network, which will be created when applying the neutron proposal. When manually creating provider networks with the neutron command, the default can be overwritten with the --provider:network_type type switch. You will also need to set a Default Type Driver for Tenant Network. It is not possible to change this default when manually creating tenant networks with the neutron command. The non-default type driver will only be used as a fallback.

Depending on your choice of the type driver, more configuration options become available.

gre Having chosen gre, you also need to specify the start and end of the tunnel ID range.

vlan The option vlan requires you to specify the Maximum number of VLANs.

vxlan Having chosen vxlan, you also need to specify the start and end of the VNI range.

Important
Important: Drivers for the VMware Compute Node

For VMware Compute Nodes, neutron must not be deployed with the openvswitch mechanism driver combined with the gre type driver.

z/VM Configuration

xCAT Host/IP Address

Host name or IP address of the xCAT Management Node.

xCAT Username/Password

xCAT login credentials.

rdev list for physnet1 vswitch uplink (if available)

List of rdev addresses that should be connected to this vswitch.

xCAT IP Address on Management Network

IP address of the xCAT management interface.

Net Mask of Management Network

Net mask of the xCAT management interface.

vmware

This plug-in requires configuring access to the VMware NSX service.

VMware NSX User Name/Password

Login credentials for the VMware NSX server. The user needs to have administrator permissions on the NSX server.

VMware NSX Controllers

Enter the IP address and the port number (IP-ADDRESS:PORT) of the controller API endpoint. If the port number is omitted, port 443 will be used. You may also enter multiple API endpoints (comma-separated), provided they all belong to the same controller cluster. When multiple API endpoints are specified, the plugin will load balance requests on the various API endpoints.

UUID of the NSX Transport Zone/Gateway Service

The UUIDs for the transport zone and the gateway service can be obtained from the NSX server. They will be used when networks are created.

The neutron Barclamp
Figure 12.16: The neutron Barclamp

The neutron component consists of two different roles:

neutron-server

neutron-server provides the scheduler and the API. It needs to be installed on a Control Node.

neutron-network

This service runs the various agents that manage the network traffic of all the cloud instances. It acts as the DHCP and DNS server and as a gateway for all cloud instances. It is recommended to deploy this role on a dedicated node supplied with sufficient network capacity.

The neutron barclamp
Figure 12.17: The neutron barclamp

12.10.1 Using Infoblox IPAM Plug-in

In the neutron barclamp, you can enable support for the infoblox IPAM plug-in and configure it. For configuration, the infoblox section contains the subsections grids and grid_defaults.

grids

This subsection must contain at least one entry. For each entry, the following parameters are required:

  • admin_user_name

  • admin_password

  • grid_master_host

  • grid_master_name

  • data_center_name

You can also add multiple entries to the grids section. However, the upstream infoblox agent only supports a single grid currently.
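
For illustration, a single entry in the grids list could look like the following sketch in the Raw view; all values are placeholders and must be replaced with the data of your Infoblox installation, and the exact structure should be verified against your proposal:

"grids": [
  {
    "admin_user_name": "admin",
    "admin_password": "PASSWORD",
    "grid_master_host": "infoblox.example.com",
    "grid_master_name": "infoblox.example.com",
    "data_center_name": "dc1"
  }
],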

grid_defaults

This subsection contains the default settings that are used for each grid (unless you have configured specific settings within the grids section).

For detailed information on all infoblox-related configuration settings, see https://github.com/openstack/networking-infoblox/blob/master/doc/source/installation.rst.

Currently, all configuration options for infoblox are only available in the raw mode of the neutron barclamp. To enable support for the infoblox IPAM plug-in and configure it, proceed as follows:

  1. Edit the neutron barclamp proposal or create a new one.

  2. Click Raw and search for the following section:

    "use_infoblox": false,
  3. To enable support for the infoblox IPAM plug-in, change this entry to:

    "use_infoblox": true,
  4. In the grids section, configure at least one grid by replacing the example values for each parameter with real values.

  5. If you need specific settings for a grid, add some of the parameters from the grid_defaults section to the respective grid entry and adjust their values.

    Otherwise Crowbar applies the default setting to each grid when you save the barclamp proposal.

  6. Save your changes and apply them.

12.10.2 HA Setup for neutron

neutron can be made highly available by deploying neutron-server and neutron-network on a cluster. While neutron-server may be deployed on a cluster shared with other services, it is strongly recommended to use a dedicated cluster solely for the neutron-network role.

12.10.3 Setting Up Multiple External Networks

This section shows you how to create external networks on SUSE OpenStack Cloud.

12.10.3.1 New Network Configurations

  1. If you have not yet deployed Crowbar, add the following configuration to /etc/crowbar/network.json to set up an external network, using the name of your new network, VLAN ID, and network addresses. If you have already deployed Crowbar, then add this configuration to the Raw view of the Network Barclamp.

    "public2": {
              "conduit": "intf1",
              "vlan": 600,
              "use_vlan": true,
              "add_bridge": false,
              "subnet": "192.168.135.128",
              "netmask": "255.255.255.128",
              "broadcast": "192.168.135.255",
              "ranges": {
                "host": { "start": "192.168.135.129",
                   "end": "192.168.135.254" }
              }
        },
  2. Modify the additional_external_networks in the Raw view of the neutron Barclamp with the name of your new external network.

  3. Apply both barclamps, and it may also be necessary to re-apply the nova Barclamp.

  4. Then follow the steps in the next section to create the new external network.

12.10.3.2 Create the New External Network

The following steps add the network settings, including IP address pools, gateway, routing, and virtual switches to your new network.

  1. Set up interface mapping using either Open vSwitch (OVS) or Linuxbridge. For Open vSwitch run the following command:

    openstack network create public2 --external \
     --provider-network-type flat --provider-physical-network public2

    For Linuxbridge run the following command:

     openstack network create public2 --external \
      --provider-network-type vlan --provider-physical-network physnet1 \
      --provider-segment 600
  2. If a different network is used, Crowbar will create a new interface mapping, and you can use a flat network:

    openstack network create public2 --external \
     --provider-network-type flat --provider-physical-network public2
  3. Create a subnet:

    openstack subnet create public2 --network public2 \
     --allocation-pool start=192.168.135.2,end=192.168.135.127 \
     --gateway 192.168.135.1 --subnet-range 192.168.135.0/24 --no-dhcp
  4. Create a router, router2:

    openstack router create router2
  5. Connect router2 to the new external network:

    openstack router set --external-gateway public2 router2
  6. Create a new private network and connect it to router2

    openstack network create priv-net
    openstack subnet create priv-net-sub --network priv-net \
     --gateway 10.10.10.1 --subnet-range 10.10.10.0/24
    openstack router add subnet router2 priv-net-sub
  7. Boot a VM on priv-net-sub and set a security group that allows SSH.

  8. Assign a floating IP address to the VM, this time from the network public2.

  9. From the node, verify that SSH is working by opening an SSH session to the VM. A command sketch for these last three steps is shown below.
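
A minimal command sketch for steps 7 to 9; the server, image, flavor, key pair, and security group names (test-vm, sles-image, m1.small, mykey, ssh-allowed) as well as FLOATING_IP are placeholders that must be adapted to your environment:

openstack security group create ssh-allowed
openstack security group rule create --protocol tcp --dst-port 22 ssh-allowed
openstack server create test-vm --image sles-image --flavor m1.small \
 --network priv-net --security-group ssh-allowed --key-name mykey
openstack floating ip create public2
openstack server add floating ip test-vm FLOATING_IP
ssh root@FLOATING_IP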

12.10.3.3 How the Network Bridges are Created

For OVS, a new bridge will be created by Crowbar, in this case br-public2. In the bridge mapping the new network will be assigned to the bridge. The interface specified in /etc/crowbar/network.json (in this case eth0.600) will be plugged into br-public2. The new public network can be created in neutron using the new public network name as provider:physical_network.

For Linuxbridge, Crowbar will check the interface associated with public2. If this is the same as physnet1 no interface mapping will be created. The new public network can be created in neutron using physnet1 as physical network and specifying the correct VLAN ID:

openstack network create public2 --external \
 --provider-network-type vlan --provider-physical-network physnet1 \
 --provider-segment 600

A bridge named brq-NET_ID will be created and the interface specified in /etc/crowbar/network.json will be plugged into it. If a new interface is associated in /etc/crowbar/network.json with public2 then Crowbar will add a new interface mapping and the second public network can be created using public2 as the physical network:

openstack network create public2 --external \
 --provider-network-type flat --provider-physical-network public2

12.11 Deploying nova

nova provides the key services for managing the SUSE OpenStack Cloud and sets up the Compute Nodes. SUSE OpenStack Cloud currently supports KVM and VMware vSphere. The unsupported QEMU option is included to enable test setups with virtualized nodes. The following attributes can be configured for nova:

Scheduler Options: Virtual RAM to Physical RAM allocation ratio

Set the overcommit ratio for RAM for instances on the Compute Nodes. A ratio of 1.0 means no overcommitment. Changing this value is not recommended.

Scheduler Options: Virtual CPU to Physical CPU allocation ratio

Set the overcommit ratio for CPUs for instances on the Compute Nodes. A ratio of 1.0 means no overcommitment.

Scheduler Options: Virtual Disk to Physical Disk allocation ratio

Set the overcommit ratio for virtual disks for instances on the Compute Nodes. A ratio of 1.0 means no overcommitment.

Scheduler Options: Reserved Memory for nova-compute hosts (MB)

Amount of reserved host memory that is not used for allocating VMs by nova-compute.

Live Migration Support: Enable Libvirt Migration

Allows moving KVM instances to a different Compute Node running the same hypervisor (cross-hypervisor migrations are not supported). Useful when a Compute Node needs to be shut down or rebooted for maintenance, or when the load of the Compute Node is very high. Instances can be moved while running (Live Migration).

Warning
Warning: Libvirt Migration and Security

Enabling the libvirt migration option will open a TCP port on the Compute Nodes that allows access to all instances from all machines in the admin network. Ensure that only authorized machines have access to the admin network when enabling this option.

Tip
Tip: Specifying Network for Live Migration

It is possible to change the network used for live migration. This is done in the Raw view of the nova barclamp. In the migration section, change the network attribute to the appropriate value (for example, storage for Ceph).
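
In the Raw view, the relevant part of the proposal could look like the following minimal sketch (the surrounding attributes are omitted, and the exact nesting may differ in your proposal):

"migration": {
  "network": "storage"
}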

KVM Options: Enable Kernel Samepage Merging

Kernel SamePage Merging (KSM) is a Linux Kernel feature which merges identical memory pages from multiple running processes into one memory region. Enabling it optimizes memory usage on the Compute Nodes when using the KVM hypervisor at the cost of slightly increasing CPU usage.

SSL Support: Protocol

Choose whether to encrypt public communication (HTTPS) or not (HTTP). If choosing HTTPS, refer to SSL Support: Protocol for configuration details.

VNC Settings: NoVNC Protocol

After having started an instance you can display its VNC console in the OpenStack Dashboard (horizon) via the browser using the noVNC implementation. By default this connection is not encrypted and can potentially be eavesdropped on.

Enable encrypted communication for noVNC by choosing HTTPS and providing the locations for the certificate key pair files.

Logging: Verbose Logging

Shows debugging output in the log files when set to true.

Note
Note: Custom Vendor Data for Instances

You can pass custom vendor data to all VMs via nova's metadata server. For example, information about a custom SMT server can be used by the SUSE guest images to automatically configure the repositories for the guest.

  1. To pass custom vendor data, switch to the Raw view of the nova barclamp.

  2. Search for the following section:

    "metadata": {
      "vendordata": {
        "json": "{}"
      }
    }
  3. As value of the json entry, enter valid JSON data. For example:

    "metadata": {
      "vendordata": {
        "json": "{\"CUSTOM_KEY\": \"CUSTOM_VALUE\"}"
      }
    }

    The string needs to be escaped because the barclamp file is in JSON format, too.

Use the following command to access the custom vendor data from inside a VM:

curl -s http://METADATA_SERVER/openstack/latest/vendor_data.json

The IP address of the metadata server is always the same from within a VM. For more details, see https://www.suse.com/communities/blog/vms-get-access-metadata-neutron/.

The nova Barclamp
Figure 12.18: The nova Barclamp

The nova component consists of eight different roles:

nova-controller

Distributing and scheduling the instances is managed by the nova-controller. It also provides networking and messaging services. nova-controller needs to be installed on a Control Node.

nova-compute-kvm / nova-compute-qemu / nova-compute-vmware / nova-compute-zvm

Provides the hypervisors (KVM, QEMU, VMware vSphere, and z/VM) and tools needed to manage the instances. Only one hypervisor can be deployed on a single compute node. To use different hypervisors in your cloud, deploy different hypervisors to different Compute Nodes. A nova-compute-* role needs to be installed on every Compute Node. However, not all hypervisors need to be deployed.

Each image that will be made available in SUSE OpenStack Cloud to start an instance is bound to a hypervisor. Each hypervisor can be deployed on multiple Compute Nodes (except for the VMware vSphere role, see below). In a multi-hypervisor deployment you should make sure to deploy the nova-compute-* roles in a way that ensures enough compute power is available for each hypervisor.

Note
Note: Re-assigning Hypervisors

Existing nova-compute-* nodes can be changed in a production SUSE OpenStack Cloud without service interruption. You need to evacuate the node, re-assign a new nova-compute role via the nova barclamp and Apply the change. nova-compute-vmware can only be deployed on a single node.

The nova Barclamp: Node Deployment Example with Two KVM Nodes
Figure 12.19: The nova Barclamp: Node Deployment Example with Two KVM Nodes

When deploying a nova-compute-vmware node with the vmware_dvs ML2 driver enabled in the neutron barclamp, the following new attributes are also available in the vcenter section of the Raw mode: dvs_name (the name of the DVS switch configured on the target vCenter cluster) and dvs_security_groups (enable or disable implementing security groups through DVS traffic rules).

It is important to specify the correct dvs_name value, as the barclamp expects the DVS switch to be preconfigured on the target VMware vCenter cluster.

Warning
Warning: vmware_dvs must be enabled

Deploying nova-compute-vmware nodes will not result in a functional cloud setup if the vmware_dvs ML2 plugin is not enabled in the neutron barclamp.

12.11.1 HA Setup for nova

Making nova-controller highly available requires no special configuration—it is sufficient to deploy it on a cluster.

To enable High Availability for Compute Nodes, deploy the following roles to one or more clusters with remote nodes:

  • nova-compute-kvm

  • nova-compute-qemu

  • ec2-api

The cluster to which you deploy the roles above can be completely independent of the one to which the role nova-controller is deployed.

However, the nova-controller and ec2-api roles must be deployed the same way (either both to a cluster or both to individual nodes). This is due to Crowbar design limitations.

Tip
Tip: Shared Storage

It is recommended to use shared storage for the /var/lib/nova/instances directory, to ensure that ephemeral disks will be preserved during recovery of VMs from failed compute nodes. Without shared storage, any ephemeral disks will be lost, and recovery will rebuild the VM from its original image.

If an external NFS server is used, enable the following option in the nova barclamp proposal: Shared Storage for nova instances has been manually configured.

12.12 Deploying horizon (OpenStack Dashboard)

The last component that needs to be deployed is horizon, the OpenStack Dashboard. It provides a Web interface for users to start and stop instances and for administrators to manage users, groups, roles, etc. horizon should be installed on a Control Node. To make horizon highly available, deploy it on a cluster.

The following attributes can be configured:

Session Timeout

Timeout (in minutes) after which a user is logged out automatically. The default value is set to four hours (240 minutes).

Note
Note: Timeouts Larger than Four Hours

Every horizon session requires a valid keystone token. These tokens also have a lifetime of four hours (14400 seconds). Setting the horizon session timeout to a value larger than 240 will therefore have no effect, and you will receive a warning when applying the barclamp.

To successfully apply a timeout larger than four hours, you first need to adjust the keystone token expiration accordingly. To do so, open the keystone barclamp in Raw mode and adjust the value of the key token_expiration. Note that the value has to be provided in seconds. When the change is successfully applied, you can adjust the horizon session timeout (in minutes). Note that extending the keystone token expiration may cause scalability issues in large and very busy SUSE OpenStack Cloud installations.
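
For example, to allow an eight-hour session, first raise the keystone token lifetime to at least 28800 seconds. A minimal sketch of the corresponding entry in the Raw view of the keystone barclamp (surrounding attributes omitted; verify the exact key location in your proposal):

"token_expiration": 28800

Afterwards, the horizon Session Timeout can be set to up to 480 minutes.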

User Password Validation: Regular expression used for password validation

Specify a regular expression with which to check the password. The default expression (.{8,}) tests for a minimum length of 8 characters. The string you enter is interpreted as a Python regular expression (see http://docs.python.org/2.7/library/re.html#module-re for a reference).
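
As an illustrative example only (not a recommendation), a stricter pattern that additionally requires at least one digit and one uppercase letter could look like this:

^(?=.*[0-9])(?=.*[A-Z]).{8,}$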

User Password Validation: Text to display if the password does not pass validation

Error message that will be displayed in case the password validation fails.

SSL Support: Protocol

Choose whether to encrypt public communication (HTTPS) or not (HTTP). If choosing HTTPS, you have two choices. You can either Generate (self-signed) certificates or provide the locations for the certificate key pair files and, optionally, the certificate chain file. Using self-signed certificates is for testing purposes only and should never be used in production environments!

The horizon Barclamp
Figure 12.20: The horizon Barclamp

12.12.1 HA Setup for horizon

Making horizon highly available requires no special configuration—it is sufficient to deploy it on a cluster.

12.13 Deploying heat (Optional)

heat is a template-based orchestration engine that enables you to, for example, start workloads requiring multiple servers or to automatically restart instances if needed. It also brings auto-scaling to SUSE OpenStack Cloud by automatically starting additional instances if certain criteria are met. For more information about heat refer to the OpenStack documentation at http://docs.openstack.org/developer/heat/.

heat should be deployed on a Control Node. To make heat highly available, deploy it on a cluster.

The following attributes can be configured for heat:

Verbose Logging

Shows debugging output in the log files when set to true.

SSL Support: Protocol

Choose whether to encrypt public communication (HTTPS) or not (HTTP). If choosing HTTPS, refer to SSL Support: Protocol for configuration details.

The heat Barclamp
Figure 12.21: The heat Barclamp

12.13.1 Enabling Identity Trusts Authorization (Optional)

heat uses keystone Trusts to delegate a subset of user roles to the heat engine for deferred operations (see Steve Hardy's blog for details). It can either delegate all user roles or only those specified in the trusts_delegated_roles setting. Consequently, all roles listed in trusts_delegated_roles need to be assigned to a user, otherwise the user will not be able to use heat.

The recommended setting for trusts_delegated_roles is member, since this is the default role most users are likely to have. This is also the default setting when installing SUSE OpenStack Cloud from scratch.

On installations where this setting is introduced through an upgrade, trusts_delegated_roles will be set to heat_stack_owner. This is a conservative choice to prevent breakage in situations where unprivileged users may already have been assigned the heat_stack_owner role to enable them to use heat but lack the member role. As long as you can ensure that all users who have the heat_stack_owner role also have the member role, it is both safe and recommended to change trusts_delegated_roles to member.

Important
Important

If the Octavia barclamp is deployed, the trusts_delegated_roles configuration option must either be set to an empty value, or the load-balancer_member role must be included; otherwise it will not be possible to create Octavia load balancers via heat stacks. Refer to Section 12.20.3, “Migrating Users to Octavia” for more details on the list of specialized roles employed by Octavia. Also note that adding the load-balancer_member role to the trusts_delegated_roles list has the undesired side effect that only users that have this role assigned to them will be allowed to access the heat API, as covered previously in this section.

To view or change the trusts_delegated_role setting you need to open the heat barclamp and click Raw in the Attributes section. Search for the trusts_delegated_roles setting and modify the list of roles as desired.
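
The entry is a list of role names. On a fresh installation with the default described above, it would look roughly as follows in the Raw view:

"trusts_delegated_roles": [
  "member"
]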

the heat barclamp: Raw Mode
Figure 12.22: the heat barclamp: Raw Mode
Warning
Warning: Empty Value

An empty value for trusts_delegated_roles will delegate all of a user's roles to heat. This may create a security risk for users who are assigned privileged roles, such as admin, because these privileged roles will also be delegated to the heat engine when these users create heat stacks.

12.13.2 HA Setup for heat

Making heat highly available requires no special configuration—it is sufficient to deploy it on a cluster.

12.14 Deploying ceilometer (Optional)

ceilometer collects CPU and networking data from SUSE OpenStack Cloud. This data can be used by a billing system to enable customer billing. Deploying ceilometer is optional. ceilometer agents use the monasca database to store collected data.

For more information about ceilometer refer to the OpenStack documentation at http://docs.openstack.org/developer/ceilometer/.

Important
Important: ceilometer Restrictions

As of SUSE OpenStack Cloud Crowbar 8 data measuring is only supported for KVM and Windows instances. Other hypervisors and SUSE OpenStack Cloud features such as object or block storage will not be measured.

The following attributes can be configured for ceilometer:

Intervals used for OpenStack Compute, Image, or Block Storage meter updates (in seconds)

Specify intervals in seconds after which ceilometer performs updates of specified meters.

How long are metering samples kept in the database (in days)

Specify how long to keep the metering data. -1 means that samples are kept in the database forever.

How long are event samples kept in the database (in days)

Specify how long to keep the event data. -1 means that samples are kept in the database forever.

The ceilometer Barclamp
Figure 12.23: The ceilometer Barclamp

The ceilometer component consists of four different roles:

ceilometer-server

The notification agent.

ceilometer-central

The polling agent listens to the message bus to collect data. It needs to be deployed on a Control Node. It can be deployed on the same node as ceilometer-server.

ceilometer-agent

The compute agents collect data from the compute nodes. They need to be deployed on all KVM compute nodes in your cloud (other hypervisors are currently not supported).

ceilometer-swift-proxy-middleware

An agent collecting data from the swift nodes. This role needs to be deployed on the same node as swift-proxy.

The ceilometer Barclamp: Node Deployment
Figure 12.24: The ceilometer Barclamp: Node Deployment

12.14.1 HA Setup for ceilometer

Making ceilometer highly available requires no special configuration—it is sufficient to deploy the roles ceilometer-server and ceilometer-central on a cluster.

12.15 Deploying manila

manila provides coordinated access to shared or distributed file systems, similar to what cinder does for block storage. These file systems can be shared between instances in SUSE OpenStack Cloud.

manila uses different back-ends. As of SUSE OpenStack Cloud Crowbar 8, the currently supported back-ends are Hitachi HNAS, NetApp Driver, and CephFS. Two more back-end options, Generic Driver and Other Driver, are available for testing purposes only and are not supported.

Note
Note: Limitations for CephFS Back-end

manila uses some CephFS features that are currently not supported by the SUSE Linux Enterprise Server 12 SP4 CephFS kernel client:

  • RADOS namespaces

  • MDS path restrictions

  • Quotas

As a result, to access CephFS shares provisioned by manila, you must use ceph-fuse. For details, see http://docs.openstack.org/developer/manila/devref/cephfs_native_driver.html.

When first opening the manila barclamp, the default proposal Generic Driver is already available for configuration. To replace it, first delete it by clicking the trashcan icon and then choose a different back-end in the section Add new manila Backend. Select a Type of Share and—optionally—provide a Name for Backend. Activate the back-end with Add Backend. Note that at least one back-end must be configured.

The attributes that can be set to configure manila depend on the back-end:

Back-end: Generic

The generic driver is included as a technology preview and is not supported.

Hitachi HNAS

Specify which EVS this backend is assigned to

Provide the name of the Enterprise Virtual Server that the selected back-end is assigned to.

Specify IP for mounting shares

IP address for mounting shares.

Specify file-system name for creating shares

Provide a file-system name for creating shares.

HNAS management interface IP

IP address of the HNAS management interface for communication between manila controller and HNAS.

HNAS username Base64 String

HNAS username Base64 String required to perform tasks like creating file-systems and network interfaces.

HNAS user password

HNAS user password. Required only if private key is not provided.

RSA/DSA private key

RSA/DSA private key necessary for connecting to HNAS. Required only if password is not provided.

The time to wait for stalled HNAS jobs before aborting

Time in seconds to wait before aborting stalled HNAS jobs.

Back-end: Netapp

Name of the Virtual Storage Server (vserver)

Host name of the Virtual Storage Server.

Server Host Name

The name or IP address for the storage controller or the cluster.

Server Port

The port to use for communication. Port 80 is usually used for HTTP, 443 for HTTPS.

User name/Password for Accessing NetApp

Login credentials.

Transport Type

Transport protocol for communicating with the storage controller or cluster. Supported protocols are HTTP and HTTPS. Choose the protocol your NetApp is licensed for.

Back-end: CephFS

Use Ceph deployed by Crowbar

Set to true to use Ceph deployed with Crowbar.

Back-end: Manual

Lets you manually pick and configure a driver. Only use this option for testing purposes, as it is not supported.

The manila Barclamp
Figure 12.25: The manila Barclamp

The manila component consists of two different roles:

manila-server

The manila server provides the scheduler and the API. Installing it on a Control Node is recommended.

manila-share

The shared storage service. It can be installed on a Control Node, but it is recommended to deploy it on one or more dedicated nodes supplied with sufficient disk space and networking capacity, since it will generate a lot of network traffic.

The manila Barclamp: Node Deployment Example
Figure 12.26: The manila Barclamp: Node Deployment Example

12.15.1 HA Setup for manila

While the manila-server role can be deployed on a cluster, deploying manila-share on a cluster is not supported. Therefore it is generally recommended to deploy manila-share on several nodes—this ensures the service continues to be available even when a node fails.

12.16 Deploying Tempest (Optional)

Tempest is an integration test suite for SUSE OpenStack Cloud written in Python. It contains multiple integration tests for validating your SUSE OpenStack Cloud deployment. For more information about Tempest refer to the OpenStack documentation at http://docs.openstack.org/developer/tempest/.

Important
Important: Technology Preview

Tempest is only included as a technology preview and not supported.

Tempest may be used for testing whether the intended setup will run without problems. It should not be used in a production environment.

Tempest should be deployed on a Control Node.

The following attributes can be configured for Tempest:

Choose User name / Password

Credentials for a regular user. If the user does not exist, it will be created.

Choose Tenant

Tenant to be used by Tempest. If it does not exist, it will be created. It is safe to stick with the default value.

Choose Tempest Admin User name/Password

Credentials for an admin user. If the user does not exist, it will be created.

The Tempest Barclamp
Figure 12.27: The Tempest Barclamp
Tip
Tip: Running Tests

To run tests with Tempest, log in to the Control Node on which Tempest was deployed. Change into the directory /var/lib/openstack-tempest-test. To get an overview of available commands, run:

./tempest --help

To serially invoke a subset of all tests (the gating smoketests) to help validate the working functionality of your local cloud instance, run the following command. It will save the output to a log file tempest_CURRENT_DATE.log.

./tempest run --smoke --serial 2>&1 \
| tee "tempest_$(date +%Y-%m-%d_%H%M%S).log"

12.16.1 HA Setup for Tempest

Tempest cannot be made highly available.

12.17 Deploying Magnum (Optional)

Magnum is an OpenStack project which offers container orchestration engines for deploying and managing containers as first-class resources in OpenStack.

For more information about Magnum, see the OpenStack documentation at http://docs.openstack.org/developer/magnum/.

For information on how to deploy a Kubernetes cluster (either from command line or from the horizon Dashboard), see the Supplement to Administrator Guide and User Guide. It is available from https://documentation.suse.com/soc/9/.

The following Attributes can be configured for Magnum:

Trustee Domain: Delegate trust to cluster users if required

Deploying Kubernetes clusters in a cloud without an Internet connection requires the registry_enabled option in the cluster template to be set to true. To make this offline scenario work, you also need to set the Delegate trust to cluster users if required option to true. This restores the old, insecure behavior for clusters that have the registry_enabled or volume_driver=rexray options enabled.
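For reference, the registry_enabled option is set when creating the cluster template. A minimal sketch using the OpenStack CLI; the template name, image, network, and flavor values are placeholders:

openstack coe cluster template create offline-k8s-template \
  --coe kubernetes \
  --image fedora-atomic-latest \
  --external-network floating \
  --flavor m1.small \
  --registry-enabled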

Trustee Domain: Domain Name

Domain name to use for creating trustee for bays.

Logging: Verbose

Increases the amount of information that is written to the log files when set to true.

Logging: Debug

Shows debugging output in the log files when set to true.

Certificate Manager: Plugin

To store certificates, either use the barbican OpenStack service, a local directory (Local), or the Magnum Database (x509keypair).

Note
Note: barbican As Certificate Manager

If you choose to use barbican for managing certificates, make sure that the barbican barclamp is enabled.

The Magnum Barclamp
Figure 12.28: The Magnum Barclamp

The Magnum barclamp consists of a single role: magnum-server. It can either be deployed on a Control Node or on a cluster—see Section 12.17.1, “HA Setup for Magnum”. When deploying the role onto a Control Node, additional RAM is required for the Magnum server. It is recommended to only deploy the role to a Control Node that has 16 GB RAM.

12.17.1 HA Setup for Magnum

Making Magnum highly available requires no special configuration. It is sufficient to deploy it on a cluster.

12.18 Deploying barbican (Optional)

barbican is a component designed for storing secrets in a secure and standardized manner protected by keystone authentication. Secrets include SSL certificates and passwords used by various OpenStack components.

barbican settings can be configured in Raw mode only. To do this, open the barbican barclamp Attribute configuration in Raw mode.

The barbican Barclamp: Raw Mode
Figure 12.29: The barbican Barclamp: Raw Mode

When configuring barbican, pay particular attention to the following settings:

  • bind_host Bind host for the barbican API service

  • bind_port Bind port for the barbican API service

  • processes Number of API processes to run in Apache

  • ssl Enable or disable SSL

  • threads Number of API worker threads

  • debug Enable or disable debug logging

  • enable_keystone_listener Enable or disable the keystone listener services

  • kek An encryption key (fixed-length 32-byte Base64-encoded value) for barbican's simple_crypto plugin. If left unspecified, the key will be generated automatically.

    Note
    Note: Existing Encryption Key

    If you plan to restore and use the existing barbican database after a full reinstall (including a complete wipe of the Crowbar node), make sure to save the specified encryption key beforehand. You will need to provide it after the full reinstall in order to access the data in the restored barbican database.
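If you prefer to specify the kek explicitly instead of having it generated, a suitable fixed-length 32-byte Base64-encoded value can be produced, for example, with openssl:

# print 32 random bytes, Base64-encoded, for use as the kek attribute
openssl rand -base64 32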

SSL Support: Protocol

With the default value HTTP, public communication will not be encrypted. Choose HTTPS to use SSL for encryption. See Section 2.3, “SSL Encryption” for background information and Section 11.4.6, “Enabling SSL” for installation instructions. The following additional configuration options will become available when choosing HTTPS:

Generate (self-signed) certificates

When set to true, self-signed certificates are automatically generated and copied to the correct locations. This setting is for testing purposes only and should never be used in production environments!

SSL Certificate File / SSL (Private) Key File

Location of the certificate key pair files.

SSL Certificate is insecure

Set this option to true when using self-signed certificates to disable certificate checks. This setting is for testing purposes only and should never be used in production environments!

SSL CA Certificates File

Specify the absolute path to the CA certificate. This field is mandatory, and leaving it blank will cause the barclamp to fail. To fix this issue, you have to provide the absolute path to the CA certificate, restart the apache2 service, and re-deploy the barclamp.

When the certificate is not already trusted by the pre-installed list of trusted root certificate authorities, you need to provide a certificate bundle that includes the root and all intermediate CAs.
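If you need to assemble such a bundle, the intermediate and root CA certificates can simply be concatenated into one file; the file names and target path below are placeholders:

# order matters: intermediate CA(s) first, root CA last
cat intermediate-ca.pem root-ca.pem > /etc/barbican/ssl/ca-bundle.pem
# reference the absolute path of the resulting bundle in the SSL CA Certificates File field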

The SSL Dialog
Figure 12.30: The SSL Dialog

12.18.1 HA Setup for barbican

To make barbican highly available, assign the barbican-controller role to the Controller Cluster.

12.19 Deploying sahara

sahara provides users with simple means to provision data processing frameworks (such as Hadoop, Spark, and Storm) on OpenStack. This is accomplished by specifying configuration parameters such as the framework version, cluster topology, node hardware details, etc.

Logging: Verbose

Set to true to increase the amount of information written to the log files.

The sahara Barclamp
Figure 12.31: The sahara Barclamp

12.19.1 HA Setup for sahara

Making sahara highly available requires no special configuration. It is sufficient to deploy it on a cluster.

12.20 Deploying Octavia

SUSE OpenStack Cloud Crowbar 9 provides Octavia Load Balancing as a Service (LBaaS). It is used to manage a fleet of virtual machines, containers, or bare metal servers—collectively known as amphorae—which it spins up on demand.

Note
Note

Starting with the SUSE OpenStack Cloud Crowbar 9 release, we recommend running Octavia as a standalone load balancing solution. Neutron LBaaS is deprecated in the OpenStack Queens release, and Octavia is its replacement. Whenever possible, operators are strongly advised to migrate to Octavia. For further information on OpenStack Neutron LBaaS deprecation, refer to https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation.

Important
Important

Deploying the Octavia barclamp does not automatically run all tasks required to complete the migration from Neutron LBaaS.

Please refer to Section 12.20.3, “Migrating Users to Octavia” for instructions on migrating existing users to allow them to access the Octavia load balancer API after the Octavia barclamp is deployed.

Please refer to Section 12.20.4, “Migrating Neutron LBaaS Instances to Octavia” for instructions on migrating existing Neutron LBaaS load balancer instances to Octavia and on disabling the deprecated Neutron LBaaS provider after the Octavia barclamp is deployed.

Octavia consists of the following major components:

amphorae

Amphorae are the individual virtual machines, containers, or bare metal servers that accomplish the delivery of load balancing services to tenant application environments.

controller

The controller is the brains of Octavia. It consists of five sub-components as individual daemons. They can be run on separate back-end infrastructure.

  • The API Controller is a subcomponent that runs Octavia’s API. It takes API requests, performs simple sanitizing on them, and ships them off to the controller worker over the Oslo messaging bus.

  • The controller worker subcomponent takes sanitized API commands from the API controller and performs the actions necessary to fulfill the API request.

  • The health manager subcomponent monitors individual amphorae to ensure they are up and running, and healthy. It also handles failover events if amphorae fail unexpectedly.

  • The housekeeping manager subcomponent cleans up stale (deleted) database records, manages the spares pool, and manages amphora certificate rotation.

  • The driver agent subcomponent receives status and statistics updates from provider drivers.

network

Octavia cannot accomplish what it does without manipulating the network environment. Amphorae are spun up with a network interface on the load balancer network. They can also plug directly into tenant networks to reach back-end pool members, depending on how any given load balancing service is deployed by the tenant.

The OpenStack Octavia team has created a glossary of terms used within the context of the Octavia project and Neutron LBaaS version 2. This glossary is available here: Octavia Glossary.

In accomplishing its role, Octavia requires OpenStack services managed by other barclamps to be already deployed:

  • Nova - For managing amphora lifecycle and spinning up compute resources on demand.

  • Neutron - For network connectivity between amphorae, tenant environments, and external networks.

  • Barbican - For managing TLS certificates and credentials, when TLS session termination is configured on the amphorae.

  • Keystone - For authentication against the Octavia API, and for Octavia to authenticate with other OpenStack projects.

  • Glance - For storing the amphora virtual machine image.

The Octavia barclamp consists of the following roles:

octavia-api

The Octavia API.

octavia-backend

The Octavia worker, health-manager, and housekeeping services.

12.20.1 Prerequisites

Before configuring and applying the Octavia barclamp, there are a couple of prerequisites that have to be prepared: the Neutron management network used by the Octavia control plane services to communicate with Amphorae and the certificates needed to secure this communication.

12.20.1.1 Management network

Octavia needs a neutron provider network as a management network that the controller uses to communicate with the amphorae. The amphorae that Octavia deploys have interfaces and IP addresses on this network. It’s important that the subnet deployed on this network be sufficiently large to allow for the maximum number of amphorae and controllers likely to be deployed throughout the lifespan of the cloud installation.

To configure the Octavia management network, the network configuration must be initialized or updated to include an octavia network entry. The Octavia barclamp uses this information to automatically create the neutron provider network used for management traffic.

  1. If you have not yet deployed Crowbar, add the following configuration to /etc/crowbar/network.json to set up the Octavia management network, using the applicable VLAN ID, and network address values. If you have already deployed Crowbar, then add this configuration to the Raw view of the Network Barclamp.

    "octavia": {
                  "conduit": "intf1",
                  "vlan": 450,
                  "use_vlan": true,
                  "add_bridge": false,
                  "subnet": "172.31.0.0",
                  "netmask": "255.255.0.0",
                  "broadcast": "172.31.255.255",
                  "ranges": {
                    "host": { "start": "172.31.0.1",
                       "end": "172.31.0.255" },
                    "dhcp": { "start": "172.31.1.1",
                       "end": "172.31.255.254" }
                  }
            },
    Important
    Important

    Take care that the IP subnet does not overlap with any of the subnets configured for the other networks. The chosen VLAN ID must not be used elsewhere within the SUSE OpenStack Cloud network and must not be used by neutron (for example, if deploying neutron with VLAN support using the linuxbridge or openvswitch plugins plus VLAN, ensure that the VLAN ID does not overlap with the range of VLAN IDs allocated for the nova-fixed neutron network).

    The host range will be used to allocate IP addresses to the controller nodes where Octavia services are running, so it needs to accommodate the maximum number of controller nodes likely to be deployed throughout the lifespan of the cloud installation.

    The dhcp range will be reflected in the configuration of the actual neutron provider network used for Octavia management traffic and its size will determine the maximum number of amphorae and therefore the maximum number of load balancer instances that can be running at the same time.

    See Section 7.5, “Custom Network Configuration” for detailed instructions on how to customize the network configuration.

  2. If Crowbar is already deployed, it is also necessary to re-apply both the neutron Barclamp and the nova Barclamp for the configuration to take effect before applying the Octavia Barclamp.
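    If you prefer the command line over the Crowbar Web interface, the same can be done with crowbarctl; a minimal sketch, assuming the default proposal names and that the crowbarctl proposal commit subcommand is available in your Crowbar version:

    crowbarctl proposal commit neutron default
    crowbarctl proposal commit nova default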

Aside from configuring the physical switches to allow VLAN traffic to be correctly forwarded, no additional external network configuration is required.

12.20.1.2 Certificates

Important
Important

Crowbar will automatically change the filesystem ownership settings for these files to match the username and group used by the Octavia services, but it is otherwise the responsibility of the cloud administrator to ensure that access to these files on the controller nodes is properly restricted.

Octavia administrators set up certificate authorities for the two-way TLS authentication used in Octavia for command and control of amphorae. For more information, see the Creating the Certificate Authorities section of the Octavia Certificate Configuration Guide. Note that the Configuring Octavia section of that guide does not apply, because the barclamp will configure Octavia.

The following certificates need to be generated and stored on all controller nodes where Octavia is deployed under /etc/octavia/certs, in a relative path matching the certificate location attribute values configured in the Octavia barclamp:

Server CA certificate

The Certificate Authority (CA) certificate that is used by the Octavia controller(s) to sign the generated Amphora server certificates. The Octavia control plane services also validate the server certificates presented by Amphorae during the TLS handshake against this CA certificate.

Server CA key

The private key associated with the server CA certificate. This key must be encrypted with a non-empty passphrase that also needs to be provided as a separate barclamp attribute. The private key is required alongside the server CA certificate on the Octavia controller(s), to sign the generated Amphora server certificates.

Passphrase

The passphrase used to encrypt the server CA key.

Client CA certificate

The CA certificate used to sign the client certificates installed on the Octavia controller nodes and presented by Octavia control plane services during the TLS handshake. This CA certificate is stored on the Amphorae, which use it to validate the client certificate presented by the Octavia control plane services during the TLS handshake. The same CA certificate may be used for both client and server roles, but this is perceived as a security weakness and recommended against, as a server certificate from an amphora could be used to impersonate a controller.

Client certificate concat key

The client certificate, signed with the client CA certificate and bundled together with the client certificate key. This bundle is presented by the Octavia control plane services during the TLS handshake.

All Octavia barclamp attributes listed above, with the exception of the passphrase, are paths relative to /etc/octavia/certs. The required certificates must be present in their corresponding locations on all controller nodes where the Octavia barclamp will be deployed.
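As an illustration only, the generated files could be copied into place as follows on each controller node; the file names below are placeholders and must match the certificate location attributes configured in the barclamp:

# copy the CA material generated according to the Octavia Certificate Configuration Guide
mkdir -p /etc/octavia/certs
cp server_ca.cert.pem server_ca.key.pem client_ca.cert.pem /etc/octavia/certs/
# bundle the client certificate and its key into the single file expected by the barclamp
cat client.cert.pem client.key.pem > /etc/octavia/certs/client.cert-and-key.pem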

12.20.2 Barclamp Raw Mode

To be able to debug or access an amphora, you can provide an SSH key name to the barclamp via the Raw mode. This is the name of a key pair that has been uploaded to OpenStack. For example:

      openstack keypair create --public-key /etc/octavia/.ssh/id_rsa_amphora.pub octavia_key

Note that the keypair has to be owned by the octavia user.

12.20.3 Migrating Users to Octavia

Important
Important

This behavior is not backward compatible with the legacy Neutron LBaaS API policy, as non-admin OpenStack users will not be allowed to run openstack loadbalancer CLI commands or use the load balancer horizon dashboard unless their accounts are explicitly reconfigured to be associated with one or more of the roles listed below.

Important
Important

Please follow the instructions documented under Section 12.13.1, “Enabling Identity Trusts Authorization (Optional)” on updating the trusts roles in the heat barclamp configuration. This is required to configure heat to use the correct roles when communicating with the Octavia API and manage load balancers.

Octavia employs a set of specialized roles to control access to the load balancer API:

load-balancer_observer

User has access to load-balancer read-only APIs.

load-balancer_global_observer

User has access to load-balancer read-only APIs including resources owned by others.

load-balancer_member

User has access to load-balancer read and write APIs.

load-balancer_quota_admin

User is considered an admin for quota APIs only.

load-balancer_admin

User is considered an admin for all load-balancer APIs including resources owned by others.
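To grant one of these roles to an existing user, the standard keystone role assignment can be used; the user and project names below are placeholders:

openstack role add --user jdoe --project engineering load-balancer_member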

12.20.4 Migrating Neutron LBaaS Instances to Octavia

Important
Important

Disabling LBaaS or switching the LBaaS provider in the Neutron barclamp to Octavia is not possible while there are load balancers still running under the previous Neutron LBaaS provider and will result in a Neutron barclamp redeployment failure. To avoid this, ensure that load balancer instances that are running under the old provider are either migrated or deleted.

The migration procedure documented in this section is only relevant if LBaaS was already enabled in the Neutron barclamp, with either the HAProxy or F5 provider configured, before Octavia was deployed. The procedure should be followed by operators to migrate and/or delete all load balancer instances using the Neutron LBaaS provider that are still active, and to conclude the switch to Octavia by reconfiguring or disabling the deprecated Neutron LBaaS feature.

Octavia is a replacement for the Neutron LBaaS feature, which is deprecated in the SUSE OpenStack Cloud Crowbar 9 release. However, deploying the Octavia barclamp does not automatically disable the legacy Neutron LBaaS provider, if one is already configured in the Neutron barclamp.

Both Octavia and Neutron LBaaS need to be enabled at the same time, to facilitate the load balancer migration process. This way, operators have a migration path they can use to gradually decommission Neutron LBaaS load balancers that use the HAProxy or F5 provider and replace them with Octavia load balancers.

With Octavia deployed and Neutron LBaaS enabled, both load balancer providers can be used simultaneously:

  • The (deprecated) neutron lbaas-... CLI commands can be used to manage load balancer instances using the legacy Neutron LBaaS provider configured in the Neutron barclamp. Note that the legacy Neutron LBaaS instances will not be visible in the load balancer horizon dashboard.

  • The openstack loadbalancer CLI commands as well as the load balancer horizon dashboard can be used to manage Octavia load balancers. Also note that OpenStack users are required to have special roles associated with their projects to be able to access the Octavia API, as covered in Section 12.20.3, “Migrating Users to Octavia”.

Note
Note

(Optional) To prevent regular users from creating or changing the configuration of currently running legacy Neutron LBaaS load balancer instances during the migration process, the neutron API policy should be temporarily changed to prevent these operations. For this purpose, a neutron-lbaas.json file can be created in the /etc/neutron/policy.d folder on all neutron-server nodes (no service restart required):

mkdir /etc/neutron/policy.d
cat > /etc/neutron/policy.d/neutron-lbaas.json <<EOF
{
  "context_is_admin": "role:admin",
  "context_is_advsvc": "role:advsvc",
  "default": "rule:admin_or_owner",
  "create_loadbalancer": "rule:admin_only",
  "update_loadbalancer": "rule:admin_only",
  "get_loadbalancer": "!",
  "delete_loadbalancer": "rule:admin_only",
  "create_listener": "rule:admin_only",
  "get_listener": "",
  "delete_listener": "rule:admin_only",
  "update_listener": "rule:admin_only",
  "create_pool": "rule:admin_only",
  "get_pool": "",
  "delete_pool": "rule:admin_only",
  "update_pool": "rule:admin_only",
  "create_healthmonitor": "rule:admin_only",
  "get_healthmonitor": "",
  "update_healthmonitor": "rule:admin_only",
  "delete_healthmonitor": "rule:admin_only",
  "create_pool_member": "rule:admin_only",
  "get_pool_member": "",
  "update_pool_member": "rule:admin_only",
  "delete_pool_member": "rule:admin_only"
}
EOF
chown -R root:neutron /etc/neutron/policy.d
chmod 640 /etc/neutron/policy.d/neutron-lbaas.json

If users need to create or change the configuration of currently running legacy Neutron LBaaS load balancer instances during the migration process, empty the neutron-lbaas.json file in the /etc/neutron/policy.d folder on all neutron-server nodes (or create it as an empty file if it does not exist), then restart the neutron service via systemctl restart openstack-neutron.service on all neutron-server nodes.
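A minimal sketch of reverting the temporary policy restriction on a neutron-server node:

# empty the temporary policy override so the default owner-based policy applies again
: > /etc/neutron/policy.d/neutron-lbaas.json
systemctl restart openstack-neutron.service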

With all of the above in check, the actual migration process consists of replacing Neutron LBaaS instances with Octavia instances. There are many different ways to accomplish this, depending on the size and purpose of the cloud deployment, the number of load balancers that need to be migrated, the project and user configuration, and so on. This section only gives a few pointers and recommendations on how to approach this task; the actual execution needs to be attuned to each particular situation.

Migrating a single load balancer instance generally comprises the following steps (see the command sketch after the list):

  • Use the neutron lbaas-... CLI to retrieve information about the load balancer configuration, including the complete set of related listener, pool, member and health monitor instances.

  • Use the openstack loadbalancer CLI or the load balancer horizon dashboard to create an Octavia load balancer and its associated listener, pool, member and health monitor instances to accurately match the project and Neutron LBaaS load balancer configuration extracted during the previous step. Note that the Octavia load balancer instance and the Neutron LBaaS instance cannot share the same VIP address value if both instances are running at the same time. This could be a problem if the load balancer VIP address is accessed directly (that is, not via a floating IP). In this case, the legacy load balancer instance needs to be deleted first, which incurs a longer interruption in service availability.

  • Once the Octavia instance is up and running, if a floating IP is associated with the Neutron LBaaS load balancer VIP address, re-associate the floating IP with the Octavia load balancer VIP address. Using a floating IP has the advantage that the migration can be performed with minimal downtime. If the load balancer VIP address needs to be accessed directly (e.g. from another VM attached to the same Neutron network or router), then all the remote affected services need to be reconfigured to use the new VIP address.

  • The two load balancer instances can continue to run in parallel, while the operator or owner verifies the Octavia load balancer operation. If any problems occur, the change can be reverted by undoing the actions performed during the previous step. If a floating IP is involved, this could be as simple as switching it back to the Neutron LBaaS load balancer instance.

  • When it's safe, delete the Neutron LBaaS load balancer instance, along with all its related listener, pool, member and health monitor instances.
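The following sketch illustrates these steps on the command line; all names and IDs are placeholders, and the exact listener, pool, member, and health monitor options must be taken from the output of the legacy instance:

# inspect the legacy instance and its related objects
neutron lbaas-loadbalancer-show legacy-lb
neutron lbaas-listener-list
neutron lbaas-pool-list

# re-create an equivalent Octavia load balancer
openstack loadbalancer create --name new-lb --vip-subnet-id private-subnet
openstack loadbalancer listener create --name new-listener \
  --protocol HTTP --protocol-port 80 new-lb

# if a floating IP is used, re-point it at the new VIP port
openstack floating ip set --port OCTAVIA_VIP_PORT_ID FLOATING_IP_ID

# once the new instance has been verified, delete the legacy instance
neutron lbaas-loadbalancer-delete legacy-lb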

Depending on the number of load balancer instances that need to be migrated and the complexity of the overall setup that they are integrated into, the migration may be performed by the cloud operators, the owners themselves, or a combination of both. It is generally recommended that the load balancer owners have some involvement in this process or at least be notified of this migration procedure, because the load balancer migration is not an entirely seamless operation. One or more of the load balancer configuration attributes listed below may change during the migration, and there may be other operational components, managed by OpenStack or otherwise (for example, OpenStack heat stacks, configuration management scripts, database entries, or non-persistent application states), that only the owner(s) may be aware of:

  • The load balancer UUID value, along with the UUID values of every other related object (listeners, pools, members etc.). Even though the name values may be preserved by the migration, the UUID values will be different.

  • The load balancer VIP address will change during a non-disruptive migration. This is especially relevant if there is no floating IP associated with the previous VIP address.

When the load balancer migration is complete, the Neutron LBaaS provider can either be switched to Octavia or turned off entirely in the Neutron barclamp, to finalize the migration process.

The only advantage of having Octavia configured as the Neutron LBaaS provider is that it continues to allow users to manage Octavia load balancers via the deprecated neutron lbaas-... CLI, but it is otherwise recommended to disable LBaaS in the Neutron barclamp.

12.21 Deploying ironic (optional)

Ironic is the OpenStack bare metal service for provisioning physical machines. Refer to the OpenStack developer and admin manual for information on drivers and on administering ironic.

Deploying the ironic barclamp is done in five steps:

  • Set options in the Custom view of the barclamp.

  • List the enabled_drivers in the Raw view.

  • Configure the ironic network in network.json.

  • Apply the barclamp to a Control Node.

  • Apply the nova-compute-ironic role to the same node you applied the ironic barclamp to, in place of the other nova-compute-* roles.

12.21.1 Custom View Options

Currently, there are two options in the Custom view of the barclamp.

Enable automated node cleaning

Node cleaning prepares the node to accept a new workload. When you set this to true, ironic collects a list of cleaning steps from the Power, Deploy, Management, and RAID interfaces of the driver assigned to the node. ironic automatically prioritizes and executes the cleaning steps, and changes the state of the node to "cleaning". When cleaning is complete, the state becomes "available". After a new workload is assigned to the machine, its state changes to "active".

false disables automatic cleaning, and you must configure and apply node cleaning manually. This requires the admin to create and prioritize the cleaning steps, and to set up a cleaning network. Apply manual cleaning when you have long-running or destructive tasks that you wish to monitor and control more closely. (See Node Cleaning.)

SSL Support: Protocol

SSL support is not yet enabled, so the only option is HTTP.

The ironic barclamp Custom view
Figure 12.32: The ironic barclamp Custom view

12.21.2 ironic Drivers

You must enter the Raw view of the barclamp and specify a list of drivers to load during service initialization. pxe_ipmitool is the recommended default ironic driver. It uses the Intelligent Platform Management Interface (IPMI) to control the power state of your bare metal machines, creates the appropriate PXE configurations to start them, and then performs the steps to provision and configure the machines.

"enabled_drivers": ["pxe_ipmitool"],

See ironic Drivers for more information.
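Once the ironic barclamp and the nova-compute-ironic role are applied, bare metal nodes can be enrolled against this driver with the OpenStack CLI. A minimal sketch; the IPMI address and credentials are placeholders:

openstack baremetal node create --driver pxe_ipmitool \
  --driver-info ipmi_address=192.168.128.50 \
  --driver-info ipmi_username=admin \
  --driver-info ipmi_password=secret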

12.21.3 Example ironic Network Configuration

This is a complete ironic network.json example, based on the default network.json, followed by a diff that shows the ironic-specific configurations.

Example 12.1: Example network.json
{
  "start_up_delay": 30,
  "enable_rx_offloading": true,
  "enable_tx_offloading": true,
  "mode": "single",
  "teaming": {
    "mode": 1
  },
  "interface_map": [
    {
      "bus_order": [
        "0000:00/0000:00:01",
        "0000:00/0000:00:03"
      ],
      "pattern": "PowerEdge R610"
    },
    {
      "bus_order": [
        "0000:00/0000:00:01.1/0000:01:00.0",
        "0000:00/0000:00:01.1/0000.01:00.1",
        "0000:00/0000:00:01.0/0000:02:00.0",
        "0000:00/0000:00:01.0/0000:02:00.1"
      ],
      "pattern": "PowerEdge R620"
    },
    {
      "bus_order": [
        "0000:00/0000:00:01",
        "0000:00/0000:00:03"
      ],
      "pattern": "PowerEdge R710"
    },
    {
      "bus_order": [
        "0000:00/0000:00:04",
        "0000:00/0000:00:02"
      ],
      "pattern": "PowerEdge C6145"
    },
    {
      "bus_order": [
        "0000:00/0000:00:03.0/0000:01:00.0",
        "0000:00/0000:00:03.0/0000:01:00.1",
        "0000:00/0000:00:1c.4/0000:06:00.0",
        "0000:00/0000:00:1c.4/0000:06:00.1"
      ],
      "pattern": "PowerEdge R730xd"
    },
    {
      "bus_order": [
        "0000:00/0000:00:1c",
        "0000:00/0000:00:07",
        "0000:00/0000:00:09",
        "0000:00/0000:00:01"
      ],
      "pattern": "PowerEdge C2100"
    },
    {
      "bus_order": [
        "0000:00/0000:00:01",
        "0000:00/0000:00:03",
        "0000:00/0000:00:07"
      ],
      "pattern": "C6100"
    },
    {
      "bus_order": [
        "0000:00/0000:00:01",
        "0000:00/0000:00:02"
      ],
      "pattern": "product"
    }
  ],
  "conduit_map": [
    {
      "conduit_list": {
        "intf0": {
          "if_list": [
            "1g1",
            "1g2"
          ]
        },
        "intf1": {
          "if_list": [
            "1g1",
            "1g2"
          ]
        },
        "intf2": {
          "if_list": [
            "1g1",
            "1g2"
          ]
        },
        "intf3": {
          "if_list": [
            "1g1",
            "1g2"
          ]
        }
      },
      "pattern": "team/.*/.*"
    },
    {
      "conduit_list": {
        "intf0": {
          "if_list": [
            "?1g1"
          ]
        },
        "intf1": {
          "if_list": [
            "?1g2"
          ]
        },
        "intf2": {
          "if_list": [
            "?1g1"
          ]
        },
        "intf3": {
          "if_list": [
            "?1g2"
          ]
        }
      },
      "pattern": "dual/.*/.*"
    },
    {
      "conduit_list": {
        "intf0": {
          "if_list": [
            "?1g1"
          ]
        },
        "intf1": {
          "if_list": [
            "?1g1"
          ]
        },
        "intf2": {
          "if_list": [
            "?1g1"
          ]
        },
        "intf3": {
          "if_list": [
            "?1g2"
          ]
        }
      },
      "pattern": "single/.*/.*ironic.*"
    },
    {
      "conduit_list": {
        "intf0": {
          "if_list": [
            "?1g1"
          ]
        },
        "intf1": {
          "if_list": [
            "?1g1"
          ]
        },
        "intf2": {
          "if_list": [
            "?1g1"
          ]
        },
        "intf3": {
          "if_list": [
            "?1g1"
          ]
        }
      },
      "pattern": "single/.*/.*"
    },
    {
      "conduit_list": {
        "intf0": {
          "if_list": [
            "?1g1"
          ]
        },
        "intf1": {
          "if_list": [
            "1g1"
          ]
        },
        "intf2": {
          "if_list": [
            "1g1"
          ]
        },
        "intf3": {
          "if_list": [
            "1g1"
          ]
        }
      },
      "pattern": ".*/.*/.*"
    },
    {
      "conduit_list": {
        "intf0": {
          "if_list": [
            "1g1"
          ]
        },
        "intf1": {
          "if_list": [
            "?1g1"
          ]
        },
        "intf2": {
          "if_list": [
            "?1g1"
          ]
        },
        "intf3": {
          "if_list": [
            "?1g1"
          ]
        }
      },
      "pattern": "mode/1g_adpt_count/role"
    }
  ],
  "networks": {
    "ironic": {
      "conduit": "intf3",
      "vlan": 100,
      "use_vlan": false,
      "add_bridge": false,
      "add_ovs_bridge": false,
      "bridge_name": "br-ironic",
      "subnet": "192.168.128.0",
      "netmask": "255.255.255.0",
      "broadcast": "192.168.128.255",
      "router": "192.168.128.1",
      "router_pref": 50,
      "ranges": {
        "admin": {
          "start": "192.168.128.10",
          "end": "192.168.128.11"
        },
        "dhcp": {
          "start": "192.168.128.21",
          "end": "192.168.128.254"
        }
      },
      "mtu": 1500
    },
    "storage": {
      "conduit": "intf1",
      "vlan": 200,
      "use_vlan": true,
      "add_bridge": false,
      "mtu": 1500,
      "subnet": "192.168.125.0",
      "netmask": "255.255.255.0",
      "broadcast": "192.168.125.255",
      "ranges": {
        "host": {
          "start": "192.168.125.10",
          "end": "192.168.125.239"
        }
      }
    },
    "public": {
      "conduit": "intf1",
      "vlan": 300,
      "use_vlan": true,
      "add_bridge": false,
      "subnet": "192.168.122.0",
      "netmask": "255.255.255.0",
      "broadcast": "192.168.122.255",
      "router": "192.168.122.1",
      "router_pref": 5,
      "ranges": {
        "host": {
          "start": "192.168.122.2",
          "end": "192.168.122.127"
        }
      },
      "mtu": 1500
    },
    "nova_fixed": {
      "conduit": "intf1",
      "vlan": 500,
      "use_vlan": true,
      "add_bridge": false,
      "add_ovs_bridge": false,
      "bridge_name": "br-fixed",
      "subnet": "192.168.123.0",
      "netmask": "255.255.255.0",
      "broadcast": "192.168.123.255",
      "router": "192.168.123.1",
      "router_pref": 20,
      "ranges": {
        "dhcp": {
          "start": "192.168.123.1",
          "end": "192.168.123.254"
        }
      },
      "mtu": 1500
    },
    "nova_floating": {
      "conduit": "intf1",
      "vlan": 300,
      "use_vlan": true,
      "add_bridge": false,
      "add_ovs_bridge": false,
      "bridge_name": "br-public",
      "subnet": "192.168.122.128",
      "netmask": "255.255.255.128",
      "broadcast": "192.168.122.255",
      "ranges": {
        "host": {
          "start": "192.168.122.129",
          "end": "192.168.122.254"
        }
      },
      "mtu": 1500
    },
    "bmc": {
      "conduit": "bmc",
      "vlan": 100,
      "use_vlan": false,
      "add_bridge": false,
      "subnet": "192.168.124.0",
      "netmask": "255.255.255.0",
      "broadcast": "192.168.124.255",
      "ranges": {
        "host": {
          "start": "192.168.124.162",
          "end": "192.168.124.240"
        }
      },
      "router": "192.168.124.1"
    },
    "bmc_vlan": {
      "conduit": "intf2",
      "vlan": 100,
      "use_vlan": true,
      "add_bridge": false,
      "subnet": "192.168.124.0",
      "netmask": "255.255.255.0",
      "broadcast": "192.168.124.255",
      "ranges": {
        "host": {
          "start": "192.168.124.161",
          "end": "192.168.124.161"
        }
      }
    },
    "os_sdn": {
      "conduit": "intf1",
      "vlan": 400,
      "use_vlan": true,
      "add_bridge": false,
      "mtu": 1500,
      "subnet": "192.168.130.0",
      "netmask": "255.255.255.0",
      "broadcast": "192.168.130.255",
      "ranges": {
        "host": {
          "start": "192.168.130.10",
          "end": "192.168.130.254"
        }
      }
    },
    "admin": {
      "conduit": "intf0",
      "vlan": 100,
      "use_vlan": false,
      "add_bridge": false,
      "mtu": 1500,
      "subnet": "192.168.124.0",
      "netmask": "255.255.255.0",
      "broadcast": "192.168.124.255",
      "router": "192.168.124.1",
      "router_pref": 10,
      "ranges": {
        "admin": {
          "start": "192.168.124.10",
          "end": "192.168.124.11"
        },
        "dhcp": {
          "start": "192.168.124.21",
          "end": "192.168.124.80"
        },
        "host": {
          "start": "192.168.124.81",
          "end": "192.168.124.160"
        },
        "switch": {
          "start": "192.168.124.241",
          "end": "192.168.124.250"
        }
      }
    }
  }
}
Example 12.2: Diff of ironic Configuration

This diff should help you separate the ironic items from the default network.json.

--- network.json        2017-06-07 09:22:38.614557114 +0200
+++ ironic_network.json 2017-06-05 12:01:15.927028019 +0200
@@ -91,6 +91,12 @@
             "1g1",
             "1g2"
           ]
+        },
+        "intf3": {
+          "if_list": [
+            "1g1",
+            "1g2"
+          ]
         }
       },
       "pattern": "team/.*/.*"
@@ -111,6 +117,11 @@
           "if_list": [
             "?1g1"
           ]
+        },
+        "intf3": {
+          "if_list": [
+            "?1g2"
+          ]
         }
       },
       "pattern": "dual/.*/.*"
@@ -131,6 +142,36 @@
           "if_list": [
             "?1g1"
           ]
+        },
+        "intf3": {
+          "if_list": [
+            "?1g2"
+          ]
+        }
+      },
+      "pattern": "single/.*/.*ironic.*"
+    },
+    {
+      "conduit_list": {
+        "intf0": {
+          "if_list": [
+            "?1g1"
+          ]
+        },
+        "intf1": {
+          "if_list": [
+            "?1g1"
+          ]
+        },
+        "intf2": {
+          "if_list": [
+            "?1g1"
+          ]
+        },
+        "intf3": {
+          "if_list": [
+            "?1g1"
+          ]
         }
       },
       "pattern": "single/.*/.*"
@@ -151,6 +192,11 @@
           "if_list": [
             "1g1"
           ]
+        },
+        "intf3": {
+          "if_list": [
+            "1g1"
+          ]
         }
       },
       "pattern": ".*/.*/.*"
@@ -171,12 +217,41 @@
           "if_list": [
             "?1g1"
           ]
+        },
+        "intf3": {
+          "if_list": [
+            "?1g1"
+          ]
         }
       },
       "pattern": "mode/1g_adpt_count/role"
     }
   ],
   "networks": {
+    "ironic": {
+      "conduit": "intf3",
+      "vlan": 100,
+      "use_vlan": false,
+      "add_bridge": false,
+      "add_ovs_bridge": false,
+      "bridge_name": "br-ironic",
+      "subnet": "192.168.128.0",
+      "netmask": "255.255.255.0",
+      "broadcast": "192.168.128.255",
+      "router": "192.168.128.1",
+      "router_pref": 50,
+      "ranges": {
+        "admin": {
+          "start": "192.168.128.10",
+          "end": "192.168.128.11"
+        },
+        "dhcp": {
+          "start": "192.168.128.21",
+          "end": "192.168.128.254"
+        }
+      },
+      "mtu": 1500
+    },
     "storage": {
       "conduit": "intf1",
       "vlan": 200,

12.22 How to Proceed

With a successful deployment of the OpenStack Dashboard, the SUSE OpenStack Cloud Crowbar installation is finished. To be able to test your setup by starting an instance, one last step remains to be done—uploading an image to the glance component. Refer to the Supplement to Administrator Guide and User Guide, chapter Manage images, for instructions. Images for SUSE OpenStack Cloud can be built in SUSE Studio. Refer to the Supplement to Administrator Guide and User Guide, section Building Images with SUSE Studio.

Now you can hand over to the cloud administrator to set up users, roles, flavors, etc.—refer to the Administrator Guide for details. The default credentials for the OpenStack Dashboard are user name admin and password crowbar.

12.23 SUSE Enterprise Storage integration

SUSE OpenStack Cloud Crowbar supports integration with SUSE Enterprise Storage (SES), enabling Ceph block storage as well as image storage services in SUSE OpenStack Cloud.

Enabling SES Integration

To enable SES integration on Crowbar, an SES configuration file must be uploaded to Crowbar. SES integration functionality is included in the crowbar-core package and can be used with the Crowbar UI or CLI (crowbarctl). The SES configuration file describes various aspects of the Ceph environment, and keyrings for each user and pool created in the Ceph environment for SUSE OpenStack Cloud Crowbar services.

SES 7 Configuration

The following instructions detail integrating SUSE Enterprise Storage 7.0 with SUSE OpenStack Cloud.

  1. Create the osd pools on the SUSE Enterprise Storage admin node (the names provided here are examples)

    ceph osd pool create ses-cloud-volumes 16 && \
    ceph osd pool create ses-cloud-backups 16 && \
    ceph osd pool create ses-cloud-images 16 && \
    ceph osd pool create ses-cloud-vms 16
  2. Enable the osd pools

    ceph osd pool application enable ses-cloud-volumes rbd && \
    ceph osd pool application enable ses-cloud-backups rbd && \
    ceph osd pool application enable ses-cloud-images rbd && \
    ceph osd pool application enable ses-cloud-vms rbd
  3. Configure permissions on the SUSE OpenStack Cloud Crowbar admin node

    ceph-authtool -C /etc/ceph/ceph.client.ses-cinder.keyring --name client.ses-cinder --add-key $(ceph-authtool --gen-print-key) --cap mon "allow r" --cap osd "allow class-read object_prefix rbd_children, allow rwx pool=ses-cloud-volumes, allow rwx pool=ses-cloud-vms, allow rwx pool=ses-cloud-images"
    ceph-authtool -C /etc/ceph/ceph.client.ses-cinder-backup.keyring --name client.ses-cinder-backup --add-key $(ceph-authtool --gen-print-key) --cap mon "allow r" --cap osd "allow class-read object_prefix rbd_children, allow rwx pool=ses-cloud-backups"
    ceph-authtool -C /etc/ceph/ceph.client.ses-glance.keyring --name client.ses-glance --add-key $(ceph-authtool --gen-print-key) --cap mon "allow r" --cap osd "allow class-read object_prefix rbd_children, allow rwx pool=ses-cloud-images"
  4. Import the updated keyrings into Ceph

    ceph auth import -i /etc/ceph/ceph.client.ses-cinder-backup.keyring && \
    ceph auth import -i /etc/ceph/ceph.client.ses-cinder.keyring && \
    ceph auth import -i /etc/ceph/ceph.client.ses-glance.keyring
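    When assembling the SES configuration file to upload to Crowbar (see the YAML example in the next section), the generated keys can be read back with ceph auth get-key; a minimal sketch:

    ceph auth get-key client.ses-cinder
    ceph auth get-key client.ses-cinder-backup
    ceph auth get-key client.ses-glance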

SES 6, 5.5, 5 Configuration

For SES deployments that are version 5, 5.5, or 6, a Salt runner is used to create all the users and pools. It also generates a YAML configuration that is needed to integrate with SUSE OpenStack Cloud. The integration runner creates separate users for cinder, cinder backup (not currently used by Crowbar), and glance. Both the cinder and nova services have the same user, because cinder needs access to create objects that nova uses.

Important
Important

Support for SUSE Enterprise Storage 5 and 5.5 is deprecated. The documentation for integrating these versions is included for customers who may not yet have upgraded to newer versions of SUSE Enterprise Storage. These versions are no longer officially supported.

Configure SES 6, 5.5, or 5 with the following steps:

  1. Log in as root and run the Salt runner on the Salt admin host.

    root # salt-run --out=yaml openstack.integrate prefix=mycloud

    The prefix parameter allows pools to be created with the specified prefix. By using different prefix parameters, multiple cloud deployments can support different users and pools on the same SES deployment.

  2. YAML output is created with content similar to the following example, and can be redirected to a file using the redirect operator > or using the additional parameter --out-file=<filename>:

    ceph_conf:
         cluster_network: 10.84.56.0/21
         fsid: d5d7c7cb-5858-3218-a36f-d028df7b0673
         mon_host: 10.84.56.8, 10.84.56.9, 10.84.56.7
         mon_initial_members: ses-osd1, ses-osd2, ses-osd3
         public_network: 10.84.56.0/21
    cinder:
         key: ABCDEFGaxefEMxAAW4zp2My/5HjoST2Y87654321==
         rbd_store_pool: mycloud-cinder
         rbd_store_user: cinder
    cinder-backup:
         key: AQBb8hdbrY2bNRAAqJC2ZzR5Q4yrionh7V5PkQ==
         rbd_store_pool: mycloud-backups
         rbd_store_user: cinder-backup
    glance:
         key: AQD9eYRachg1NxAAiT6Hw/xYDA1vwSWLItLpgA==
         rbd_store_pool: mycloud-glance
         rbd_store_user: glance
    nova:
         rbd_store_pool: mycloud-nova
    radosgw_urls:
         - http://10.84.56.7:80/swift/v1
         - http://10.84.56.8:80/swift/v1
  3. Upload the generated YAML file to Crowbar using the UI or crowbarctl CLI.

  4. If the Salt runner is not available, you must manually create pools and users to allow SUSE OpenStack Cloud services to use the SES/Ceph cluster. Pools and users must be created for cinder, nova, and glance. Instructions for creating and managing pools, users and keyrings can be found in the SUSE Enterprise Storage Administration Guide in the Key Management section.

    After the required pools and users are set up on the SUSE Enterprise Storage/Ceph cluster, create an SES configuration file in YAML format (using the example template above). Upload this file to Crowbar using the UI or crowbarctl CLI.

  5. As indicated above, the SES configuration file can be uploaded to Crowbar using the UI or crowbarctl CLI.

    • From the main Crowbar UI, the upload page is under Utilities › SUSE Enterprise Storage.

      If a configuration is already stored in Crowbar, it will be visible in the upload page. A newly uploaded configuration will replace the existing one. The new configuration will be applied to the cloud on the next chef-client run. There is no need to reapply proposals.

      Configurations can also be deleted from Crowbar. After deleting a configuration, you must manually update and reapply all proposals that used SES integration.

    • With the crowbarctl CLI, the command crowbarctl ses upload FILE accepts a path to the SES configuration file.
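      For example, assuming the YAML configuration was saved as /root/ses_config.yml:

      crowbarctl ses upload /root/ses_config.yml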

Cloud Service Configuration

SES integration with SUSE OpenStack Cloud services is implemented with relevant Barclamps and installed with the crowbar-openstack package.

glance

Set Use SES Configuration to true under RADOS Store Parameters. The glance barclamp pulls the uploaded SES configuration from Crowbar when applying the glance proposal and on chef-client runs. If the SES configuration is uploaded before the glance proposal is created, Use SES Configuration is enabled automatically upon proposal creation.

cinder

Create a new RADOS backend and set Use SES Configuration to true. The cinder barclamp pulls the uploaded SES configuration from Crowbar when applying the cinder proposal and on chef-client runs. If the SES configuration was uploaded before the cinder proposal was created, a ses-ceph RADOS backend is created automatically on proposal creation with Use SES Configuration already enabled.

nova

To connect with volume stores in SES, nova uses the configuration from the cinder barclamp. For ephemeral storage, nova re-uses the rbd_store_user and key from cinder but has a separate rbd_store_pool defined in the SES configuration. Ephemeral storage on SES can be enabled or disabled by setting Use Ceph RBD Ephemeral Backend in the nova proposal. In new deployments it is enabled by default. In existing ones it is disabled for compatibility reasons.

RADOS Gateway Integration

Besides block storage, the SES cluster can also be used as a swift replacement for object storage. If the radosgw_urls section is present in the uploaded SES configuration, the first of the URLs is registered in the keystone catalog as the "swift"/"object-store" service. Some configuration is needed on the SES side to fully integrate with keystone authentication. If SES integration is enabled on a cloud with swift deployed, the SES object storage service gets higher priority by default. To override this and use swift for object storage instead, remove the radosgw_urls section from the SES configuration file and re-upload it to Crowbar. Re-apply the swift proposal or wait for the next periodic chef-client run to make the changes effective.

12.24 Roles and Services in SUSE OpenStack Cloud Crowbar

The following table lists all roles (as defined in the barclamps) and their associated services. As of SUSE OpenStack Cloud Crowbar 8, this list is a work in progress. Services can be manually started and stopped with the commands systemctl start SERVICE and systemctl stop SERVICE.
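For example, to restart and then check the glance API service listed below on the node where it runs:

systemctl restart openstack-glance-api
systemctl status openstack-glance-api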

Role

Service

ceilometer-agent

openstack-ceilometer-agent-compute

ceilometer-central

ceilometer-server

ceilometer-swift-proxy-middleware

openstack-ceilometer-agent-notification

openstack-ceilometer-agent-central

cinder-controller

openstack-cinder-api

openstack-cinder-scheduler

cinder-volume

openstack-cinder-volume

database-server

postgresql

glance-server

openstack-glance-api

openstack-glance-registry

heat-server

openstack-heat-api-cfn

openstack-heat-api-cloudwatch

openstack-heat-api

openstack-heat-engine

horizon

apache2

keystone-server

openstack-keystone

manila-server

openstack-manila-api

openstack-manila-scheduler

manila-share

openstack-manila-share

neutron-server

openstack-neutron

nova-compute-*

openstack-nova-compute

openstack-neutron-openvswitch-agent (when neutron is deployed with openvswitch)

nova-controller

openstack-nova-api

openstack-nova-cert

openstack-nova-conductor

openstack-nova-novncproxy

openstack-nova-objectstore

openstack-nova-scheduler

rabbitmq-server

rabbitmq-server

swift-dispersion

none

swift-proxy

openstack-swift-proxy

swift-ring-compute

none

swift-storage

openstack-swift-account-auditor

openstack-swift-account-reaper

openstack-swift-account-replicator

openstack-swift-account

openstack-swift-container-auditor

openstack-swift-container-replicator

openstack-swift-container-sync

openstack-swift-container-updater

openstack-swift-container

openstack-swift-object-auditor

openstack-swift-object-expirer

openstack-swift-object-replicator

openstack-swift-object-updater

openstack-swift-object

12.25 Crowbar Batch Command

This is the documentation for the crowbar batch subcommand.

crowbar batch provides a quick way of creating, updating, and applying Crowbar proposals. It can be used to:

  • Accurately capture the configuration of an existing Crowbar environment.

  • Drive Crowbar to build a complete new environment from scratch.

  • Capture one SUSE OpenStack Cloud environment and then reproduce it on another set of hardware (provided hardware and network configuration match to an appropriate extent).

  • Automatically update existing proposals.

As the name suggests, crowbar batch is intended to be run in batch mode, that is, mostly unattended. It has two modes of operation:

crowbar batch export

Exports a YAML file which describes existing proposals and how their parameters deviate from the default proposal values for that barclamp.

crowbar batch build

Imports a YAML file in the same format as above. Uses it to build new proposals if they do not yet exist. Updates the existing proposals so that their parameters match those given in the YAML file.

12.25.1 YAML file format

Here is an example YAML file. At the top-level there is a proposals array, each entry of which is a hash representing a proposal:

proposals:
- barclamp: provisioner
  # Proposal name defaults to 'default'.
  attributes:
    shell_prompt: USER@ALIAS:CWD SUFFIX
- barclamp: database
  # Default attributes are good enough, so we just need to assign
  # nodes to roles:
  deployment:
    elements:
      database-server:
        - "@@controller1@@"
- barclamp: rabbitmq
  deployment:
    elements:
      rabbitmq-server:
        - "@@controller1@@"
Note
Note: Reserved Indicators in YAML

Note that the characters @ and ` are reserved indicators in YAML. They can appear anywhere in a string except at the beginning. Therefore a string such as @@controller1@@ needs to be quoted using double quotes.

12.25.2 Top-level proposal attributes

barclamp

Name of the barclamp for this proposal (required).

name

Name of this proposal (optional; default is default). In build mode, if the proposal does not already exist, it will be created.

attributes

An optional nested hash containing any attributes for this proposal which deviate from the defaults for the barclamp.

In export mode, any attributes set to the default values are excluded to keep the YAML as short and readable as possible.

In build mode, these attributes are deep-merged with the current values for the proposal. If the proposal did not already exist, batch build will create it first. The attributes are merged with the default values for the barclamp's proposal.

wipe_attributes

An optional array of paths to nested attributes which should be removed from the proposal.

Each path is a period-delimited sequence of attributes; for example pacemaker.stonith.sbd.nodes would remove all SBD nodes from the proposal if it already exists. If a path segment contains a period, it should be escaped with a backslash, for example segment-one.segment\.two.segment_three.

This removal occurs before the deep merge described above. For example, think of a YAML file which includes a Pacemaker barclamp proposal where the wipe_attributes entry contains pacemaker.stonith.sbd.nodes. A batch build with this YAML file ensures that only SBD nodes listed in the attributes sibling hash are used at the end of the run. In contrast, without the wipe_attributes entry, the given SBD nodes would be appended to any SBD nodes already defined in the proposal.

deployment

A nested hash defining how and where this proposal should be deployed.

In build mode, this hash is deep-merged in the same way as the attributes hash, except that the array of elements for each Chef role is reset to the empty list before the deep merge. This behavior may change in the future.

12.25.3 Node Alias Substitutions

A string like @@node@@ (where node is a node alias) will be substituted for the name of that node, no matter where the string appears in the YAML file. For example, if controller1 is a Crowbar alias for node d52-54-02-77-77-02.mycloud.com, then @@controller1@@ will be substituted for that host name. This allows YAML files to be reused across environments.

12.25.4 Options

In addition to the standard options available to every crowbar subcommand (run crowbar batch --help for a full list), there are some extra options specifically for crowbar batch:

--include <barclamp[.proposal]>

Only include the barclamp / proposals given.

This option can be repeated multiple times. The inclusion value can either be the name of a barclamp (for example, pacemaker) or a specifically named proposal within the barclamp (for example, pacemaker.network_cluster).

If it is specified, then only the barclamp / proposals specified are included in the build or export operation, and all others are ignored.

--exclude <barclamp[.proposal]>

This option can be repeated multiple times. The exclusion value is the same format as for --include. The barclamps / proposals specified are excluded from the build or export operation.

--timeout <seconds>

Change the timeout for Crowbar API calls.

As Chef's run lists grow, some of the later OpenStack barclamp proposals (for example nova, horizon, or heat) can take over 5 or even 10 minutes to apply. Therefore you may need to increase this timeout to 900 seconds in some circumstances.
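For example, a build run with an increased timeout could look like the following; the YAML file name is a placeholder, and the exact position of the option may differ depending on your Crowbar version:

crowbar batch build --timeout 900 my-cloud.yaml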