Applies to HPE Helion OpenStack 8

32 Configuring Load Balancer as a Service

Abstract

The HPE Helion OpenStack Neutron LBaaS service supports several load balancing providers. By default, both Octavia and the namespace HAProxy driver are configured for use. This chapter describes the configuration in more detail.

Warning

If you are planning to upgrade from earlier versions, please contact your driver providers to determine which drivers have been certified for use with HPE Helion OpenStack. Loading drivers not certified by HPE may cause serious issues with your cloud deployment.

The HPE Helion OpenStack Neutron LBaaS service supports several load balancing providers. By default, both Octavia and the namespace HAProxy driver are configured to be used. A user can specify which provider to use with the --provider flag upon load balancer creation.

Example:

tux > neutron lbaas-loadbalancer-create --name NAME --provider \
  [octavia|haproxy] SUBNET

If you do not specify the --provider option, the load balancer defaults to Octavia. The Octavia driver provides more functionality than the HAProxy namespace driver, which is deprecated and will be retired in a future version of HPE Helion OpenStack.

Additional drivers are available for third-party hardware load balancers; refer to the respective vendor directly. The neutron service-provider-list command displays not only the currently installed load balancer drivers but also other installed services, such as VPN. You can list the available services as follows:

tux > neutron service-provider-list
+----------------+----------+---------+
| service_type   | name     | default |
+----------------+----------+---------+
| LOADBALANCERV2 | octavia  | True    |
| VPN            | openswan | True    |
| LOADBALANCERV2 | haproxy  | False   |
+----------------+----------+---------+
Note

The Octavia load balancer provider is listed as the default.

32.1 Prerequisites

To test LBaaS functionality, you need an external network and a registered image. If you have already created an external network and registered an image, you can skip this step.

Creating an external network: Section 28.4, “Creating an External Network”.

Creating and uploading a Glance image: Book “User Guide”, Chapter 10 “Creating and Uploading a Glance Image”.

32.2 Octavia Load Balancing Provider

The Octavia load balancing provider bundled with HPE Helion OpenStack 8 is an operator-grade load balancer for OpenStack, based on the OpenStack Pike version of Octavia. It differs from the namespace driver in that it boots a new Nova virtual machine, called an amphora, to house the HAProxy software that provides the load balancing function. Booting a virtual machine for each requested load balancer provides better separation of load balancers between tenants and makes it easier to grow load balancing capacity alongside compute node growth. Additionally, if the virtual machine fails for any reason, Octavia will replace it with a VM from a pool of spares, provided that feature is configured.

Note

The Health Monitor will not create or replace failed amphorae. If the pool of spare VMs is exhausted, there will be no additional virtual machines to handle load balancing requests.

Octavia uses two-way SSL encryption to communicate with the amphora. There are demo Certificate Authority (CA) certificates included with HPE Helion OpenStack 8 in ~/scratch/ansible/next/ardana/ansible/roles/octavia-common/files on the Cloud Lifecycle Manager. For additional security in production deployments, all certificate authorities should be replaced with ones you generate yourself, for example by running the following commands:

ardana > openssl genrsa -passout pass:foobar -des3 -out cakey.pem 2048
ardana > openssl req -x509 -passin pass:foobar -new -nodes -key cakey.pem -out ca_01.pem
ardana > openssl genrsa -passout pass:foobar -des3 -out servercakey.pem 2048
ardana > openssl req -x509 -passin pass:foobar -new -nodes -key servercakey.pem -out serverca_01.pem

For more details refer to the openssl man page.
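
To sanity-check the generated CA certificates before distributing them, you can inspect them with openssl. This is an optional, illustrative check; the file names are the ones produced by the commands above:

ardana > openssl x509 -in ca_01.pem -noout -subject -issuer -dates
ardana > openssl x509 -in serverca_01.pem -noout -subject -issuer -dates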

Note

If you change the certificate authority and have amphorae running with an old CA, you will not be able to control those amphorae. They will need to be failed over so they can use the new certificate. If you change the CA password for the server certificate, you need to change it in the Octavia configuration files as well. For more information, see Book “Operations Guide”, Chapter 9 “Managing Networking”, Section 9.3 “Networking Service Overview”, Section 9.3.9 “Load Balancer: Octavia Driver Administration”, Section 9.3.9.2 “Tuning Octavia Installation”.

32.3 Setup of prerequisites

Octavia Network and Management Network Ports

The Octavia management network and the Management network must be able to reach each other. If you have configured a firewall between the Octavia management network and the Management network, you must open the following ports to allow network traffic between the networks (an illustrative iptables sketch follows the list below).

  • From Management network to Octavia network

    • TCP 9443 (amphora API)

  • From Octavia network to Management network

    • TCP 9876 (Octavia API)

    • UDP 5555 (Octavia Health Manager)
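
If the firewall between the two networks is Linux-based, the required openings might look like the following iptables sketch. This is a hypothetical example only; the chain, interfaces, and the MGMT_NET_CIDR and OCTAVIA_NET_CIDR placeholders must be adapted to your environment:

# Management network to Octavia management network: amphora API
iptables -A FORWARD -p tcp -s MGMT_NET_CIDR -d OCTAVIA_NET_CIDR --dport 9443 -j ACCEPT
# Octavia management network to Management network: Octavia API and Health Manager
iptables -A FORWARD -p tcp -s OCTAVIA_NET_CIDR -d MGMT_NET_CIDR --dport 9876 -j ACCEPT
iptables -A FORWARD -p udp -s OCTAVIA_NET_CIDR -d MGMT_NET_CIDR --dport 5555 -j ACCEPT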

Installing the Amphora Image

Octavia uses Nova VMs for its load balancing function. HPE provides an image, called octavia-amphora, that is used to boot those VMs.

Warning

Without this image, the Octavia load balancer will not work.

Register the image. The OpenStack load balancing service (Octavia) does not automatically register the Amphora guest image.

  1. The full path and name for the Amphora image is /srv/www/suse-12.3/x86_64/repos/Cloud/suse/noarch/openstack-octavia-amphora-image-x86_64-0.1.0-1.21.noarch.rpm

    Switch to the ansible directory and register the image by giving the full path and name for the Amphora image as an argument to service_package:

    ardana > cd ~/scratch/ansible/next/ardana/ansible/
    ardana > ansible-playbook -i hosts/verb_hosts service-guest-image.yml \
    -e service_package=\
    /srv/www/suse-12.3/x86_64/repos/Cloud/suse/noarch/openstack-octavia-amphora-image-x86_64-0.1.0-1.21.noarch.rpm
  2. Source the service user (this can be done on a different computer)

    tux > source service.osrc
  3. Verify that the image was registered (this can be done on a computer with access to the Glance CLI client)

    tux > openstack image list
    +--------------------------------------+------------------------+--------+
    | ID                                   | Name                   | Status |
    +--------------------------------------+------------------------+--------+
    ...
    | 1d4dd309-8670-46b6-801d-3d6af849b6a9 | octavia-amphora-x86_64 | active |
    ...
    Important

    In the example above, the status of the octavia-amphora-x86_64 image is active, which means the image was successfully registered. If the status of the image is queued, you need to run the image registration again (see the verification sketch after this procedure).

    If you run the registration again by accident, the system will only upload a new image if the underlying image has changed.

    Please be aware that if you have already created load balancers they will not receive the new image. Only load balancers created after the image has been successfully installed will use the new image. If existing load balancers need to be switched to the new image please follow the instructions in Book “Operations Guide”, Chapter 9 “Managing Networking”, Section 9.3 “Networking Service Overview”, Section 9.3.9 “Load Balancer: Octavia Driver Administration”, Section 9.3.9.2 “Tuning Octavia Installation”.
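
To double-check a single image rather than scanning the full list, you can verify its status by name. This optional verification sketch assumes the image name octavia-amphora-x86_64 shown above:

tux > openstack image show octavia-amphora-x86_64 | grep status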

Set up the network, subnet, router, security groups, and IPs

If you have already created a network, subnet, router, security settings, and IPs, you can skip the following steps and go directly to creating the load balancers.

  1. Create a network.

    tux > neutron net-create lb_net1
    +---------------------------+--------------------------------------+
    | Field                     | Value                                |
    +---------------------------+--------------------------------------+
    | admin_state_up            | True                                 |
    | id                        | 71a1ac88-30a3-48a3-a18b-d98509fbef5c |
    | mtu                       | 0                                    |
    | name                      | lb_net1                              |
    | provider:network_type     | vxlan                                |
    | provider:physical_network |                                      |
    | provider:segmentation_id  | 1061                                 |
    | router:external           | False                                |
    | shared                    | False                                |
    | status                    | ACTIVE                               |
    | subnets                   |                                      |
    | tenant_id                 | 4b31d0508f83437e83d8f4d520cda22f     |
    +---------------------------+--------------------------------------+
  2. Create a subnet.

    tux > neutron subnet-create --name lb_subnet1 lb_net1 10.247.94.128/26 \
      --gateway 10.247.94.129
    +-------------------+----------------------------------------------------+
    | Field             | Value                                              |
    +-------------------+----------------------------------------------------+
    | allocation_pools  | {"start": "10.247.94.130", "end": "10.247.94.190"} |
    | cidr              | 10.247.94.128/26                                   |
    | dns_nameservers   |                                                    |
    | enable_dhcp       | True                                               |
    | gateway_ip        | 10.247.94.129                                      |
    | host_routes       |                                                    |
    | id                | 6fc2572c-53b3-41d0-ab63-342d9515f514               |
    | ip_version        | 4                                                  |
    | ipv6_address_mode |                                                    |
    | ipv6_ra_mode      |                                                    |
    | name              | lb_subnet1                                         |
    | network_id        | 71a1ac88-30a3-48a3-a18b-d98509fbef5c               |
    | subnetpool_id     |                                                    |
    | tenant_id         | 4b31d0508f83437e83d8f4d520cda22f                   |
    +-------------------+----------------------------------------------------+
  3. Create a router.

    tux > neutron router-create --distributed False lb_router1
    +-----------------------+--------------------------------------+
    | Field                 | Value                                |
    +-----------------------+--------------------------------------+
    | admin_state_up        | True                                 |
    | distributed           | False                                |
    | external_gateway_info |                                      |
    | ha                    | False                                |
    | id                    | 6aafc9a9-93f6-4d7e-94f2-3068b034b823 |
    | name                  | lb_router1                           |
    | routes                |                                      |
    | status                | ACTIVE                               |
    | tenant_id             | 4b31d0508f83437e83d8f4d520cda22f     |
    +-----------------------+--------------------------------------+
  4. Add an interface to the router. The command reports the ID of the new interface; in this example, interface 426c5898-f851-4f49-b01f-7a6fe490410c is added to the router lb_router1.

    tux > neutron router-interface-add lb_router1 lb_subnet1
  5. Set gateway for router.

    tux > neutron router-gateway-set lb_router1 ext-net
  6. Check networks.

    tux > neutron net-list
    +-----------------------------+------------------+-----------------------------+
    | id                          | name             | subnets                     |
    +-----------------------------+------------------+-----------------------------+
    | d3cb12a6-a000-4e3e-         | ext-net          | f4152001-2500-4ebe-ba9d-    |
    | 82c4-ee04aa169291           |                  | a8d6149a50df 10.247.96.0/23 |
    | 8306282a-3627-445a-a588-c18 | OCTAVIA-MGMT-NET | f00299f8-3403-45ae-ac4b-    |
    | 8b6a13163                   |                  | 58af41d57bdc                |
    |                             |                  | 10.247.94.128/26            |
    | 71a1ac88-30a3-48a3-a18b-    | lb_net1          | 6fc2572c-                   |
    | d98509fbef5c                |                  | 53b3-41d0-ab63-342d9515f514 |
    |                             |                  | 10.247.94.128/26            |
    +-----------------------------+------------------+-----------------------------+
  7. Create security group.

    tux > neutron security-group-create lb_secgroup1
    +----------------+---------------------------------------------------------------------------+
    | Field          | Value                                                                     |
    +----------------+---------------------------------------------------------------------------+
    | description    |                                                                           |
    | id             | 75343a54-83c3-464c-8773-802598afaee9                                      |
    | name           | lb_secgroup1                                                              |
    | security group | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null,|
    |    rules       | "protocol": null, "tenant_id": "4b31d...da22f", "port_range_max": null,   |
    |                | "security_group_id": "75343a54-83c3-464c-8773-802598afaee9",              |
    |                | "port_range_min": null, "ethertype": "IPv4", "id": "20ae3...97a7a"}       |
    |                | {"remote_group_id": null, "direction": "egress",                          |
    |                | "remote_ip_prefix": null, "protocol": null, "tenant_id": "4b31...a22f",   |
    |                | "port_range_max": null, "security_group_id": "7534...98afaee9",           |
    |                | "port_range_min": null, "ethertype": "IPv6", "id": "563c5c...aaef9"}      |
    | tenant_id      | 4b31d0508f83437e83d8f4d520cda22f                                          |
    +----------------+---------------------------------------------------------------------------+
  8. Create icmp security group rule.

    tux > neutron security-group-rule-create lb_secgroup1 --protocol icmp
    +-------------------+--------------------------------------+
    | Field             | Value                                |
    +-------------------+--------------------------------------+
    | direction         | ingress                              |
    | ethertype         | IPv4                                 |
    | id                | 16d74150-a5b2-4cf6-82eb-a6c49a972d93 |
    | port_range_max    |                                      |
    | port_range_min    |                                      |
    | protocol          | icmp                                 |
    | remote_group_id   |                                      |
    | remote_ip_prefix  |                                      |
    | security_group_id | 75343a54-83c3-464c-8773-802598afaee9 |
    | tenant_id         | 4b31d0508f83437e83d8f4d520cda22f     |
    +-------------------+--------------------------------------+
  9. Create TCP port 22 rule.

    tux > neutron security-group-rule-create lb_secgroup1 --protocol tcp \
      --port-range-min 22 --port-range-max 22
    +-------------------+--------------------------------------+
    | Field             | Value                                |
    +-------------------+--------------------------------------+
    | direction         | ingress                              |
    | ethertype         | IPv4                                 |
    | id                | 472d3c8f-c50f-4ad2-97a1-148778e73af5 |
    | port_range_max    | 22                                   |
    | port_range_min    | 22                                   |
    | protocol          | tcp                                  |
    | remote_group_id   |                                      |
    | remote_ip_prefix  |                                      |
    | security_group_id | 75343a54-83c3-464c-8773-802598afaee9 |
    | tenant_id         | 4b31d0508f83437e83d8f4d520cda22f     |
    +-------------------+--------------------------------------+
  10. Create TCP port 80 rule.

    tux > neutron security-group-rule-create lb_secgroup1 --protocol tcp \
      --port-range-min 80 --port-range-max 80
    +-------------------+--------------------------------------+
    | Field             | Value                                |
    +-------------------+--------------------------------------+
    | direction         | ingress                              |
    | ethertype         | IPv4                                 |
    | id                | 10a76cad-8b1c-46f6-90e8-5dddd279e5f7 |
    | port_range_max    | 80                                   |
    | port_range_min    | 80                                   |
    | protocol          | tcp                                  |
    | remote_group_id   |                                      |
    | remote_ip_prefix  |                                      |
    | security_group_id | 75343a54-83c3-464c-8773-802598afaee9 |
    | tenant_id         | 4b31d0508f83437e83d8f4d520cda22f     |
    +-------------------+--------------------------------------+
  11. If you have not already created a keypair, create one now with nova keypair-add. You will use the keypair when booting the virtual machines.

    tux > nova keypair-add lb_kp1 > lb_kp1.pem
    
    chmod 400 lb_kp1.pem
    
    cat lb_kp1.pem
    -----BEGIN RSA PRIVATE KEY-----
    MIIEqAIBAAKCAQEAkbW5W/XWGRGC0LAJI7lttR7EdDfiTDeFJ7A9b9Cff+OMXjhx
    WL26eKIr+jp8DR64YjV2mNnQLsDyCxekFpkyjnGRId3KVAeV5sRQqXgtaCXI+Rvd
    IyUtd8p1cp3DRgTd1dxO0oL6bBmwrZatNrrRn4HgKc2c7ErekeXrwLHyE0Pia/pz
    C6qs0coRdfIeXxsmS3kXExP0YfsswRS/OyDl8QhRAF0ZW/zV+DQIi8+HpLZT+RW1
    8sTTYZ6b0kXoH9wLER4IUBj1I1IyrYdxlAhe2VIn+tF0Ec4nDBn1py9iwEfGmn0+
    N2jHCJAkrK/QhWdXO4O8zeXfL4mCZ9FybW4nzQIDAQABAoIBACe0PvgB+v8FuIGp
    FjR32J8b7ShF+hIOpufzrCoFzRCKLruV4bzuphstBZK/0QG6Nz/7lX99Cq9SwCGp
    pXrK7+3EoGl8CB/xmTUylVA4gRb6BNNsdkuXW9ZigrJirs0rkk8uIwRV0GsYbP5A
    Kp7ZNTmjqDN75aC1ngRfhGgTlQUOdxBH+4xSb7GukekD13V8V5MF1Qft089asdWp
    l/TpvhYeW9O92xEnZ3qXQYpXYQgEFQoM2PKa3VW7FGLgfw9gdS/MSqpHuHGyKmjl
    uT6upUX+Lofbe7V+9kfxuV32sLL/S5YFvkBy2q8VpuEV1sXI7O7Sc411WX4cqmlb
    YoFwhrkCggCBALkYE7OMTtdCAGcMotJhTiiS5l8d4U/fn1x0zus43XV5Y7wCnMuU
    r5vCoK+a+TR9Ekzc/GjccAx7Wz/YYKp6G8FXW114dLcADXZjqjIlX7ifUud4sLCS
    y+x3KAJa7LqyzH53I6FOts9RaB5xx4gZ2WjcJquCTbATZWj7j1yGeNgvAoIAgQDJ
    h0r0Te5IliYbCRg+ES9YRZzH/PSLuIn00bbLvpOPNEoKe2Pxs+KI8Fqp6ZIDAB3c
    4EPOK5QrJvAny9Z58ZArrNZi15t84KEVAkWUATl+c4SmHc8sW/atgmUoqIzgDQXe
    AlwadHLY7JCdg7EYDuUxuTKLLOdqfpf6fKkhNxtEwwKCAIAMxi+d5aIPUxvKAOI/
    2L1XKYRCrkI9i/ZooBsjusH1+JG8iQWfOzy/aDhExlJKoBMiQOIerpABHIZYqqtJ
    OLIvrsK8ebK8aoGDWS+G1HN9v2kuVnMDTK5MPJEDUJkj7XEVjU1lNZSCTGD+MOYP
    a5FInmEA1zZbX4tRKoNjZFh0uwKCAIEAiLs7drAdOLBu4C72fL4KIljwu5t7jATD
    zRAwduIxmZq/lYcMU2RaEdEJonivsUt193NNbeeRWwnLLSUWupvT1l4pAt0ISNzb
    TbbB4F5IVOwpls9ozc8DecubuM9K7YTIc02kkepqNZWjtMsx74HDrU3a5iSsSkvj
    73Z/BeMupCMCggCAS48BsrcsDsHSHE3tO4D8pAIr1r+6WPaQn49pT3GIrdQNc7aO
    d9PfXmPoe/PxUlqaXoNAvT99+nNEadp+GTId21VM0Y28pn3EkIGE1Cqoeyl3BEO8
    f9SUiRNruDnH4F4OclsDBlmqWXImuXRfeiDHxM8X03UDZoqyHmGD3RqA53I=
    -----END RSA PRIVATE KEY-----
  12. Check the available images and boot the virtual machines.

    tux > nova image-list
    +--------------+------------------------+--------+--------+
    | ID           | Name                   | Status | Server |
    +--------------+------------------------+--------+--------+
    | 0526d...7f39 | cirros-0.4.0-x86_64    | ACTIVE |        |
    | 8aa51...8f2f | octavia-amphora-x86_64 | ACTIVE |        |
    +--------------+------------------------+--------+--------+

    Boot first VM.

    tux > nova boot --flavor 1 --image 04b1528b-b1e2-45d4-96d1-fbe04c6b2efd --key-name lb_kp1 \
      --security-groups lb_secgroup1 --nic net-id=71a1ac88-30a3-48a3-a18b-d98509fbef5c \
      lb_vm1 --poll
    +--------------------------------------+--------------------------------------+
    | Property                             | Value                                |
    +--------------------------------------+--------------------------------------+
    | OS-DCF:diskConfig                    | MANUAL                               |
    | OS-EXT-AZ:availability_zone          |                                      |
    | OS-EXT-SRV-ATTR:host                 | -                                    |
    | OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                    |
    | OS-EXT-SRV-ATTR:instance_name        | instance-00000031                    |
    | OS-EXT-STS:power_state               | 0                                    |
    | OS-EXT-STS:task_state                | scheduling                           |
    | OS-EXT-STS:vm_state                  | building                             |
    | OS-SRV-USG:launched_at               | -                                    |
    | OS-SRV-USG:terminated_at             | -                                    |
    | accessIPv4                           |                                      |
    | accessIPv6                           |                                      |
    | adminPass                            | NeVvhP5E8iCy                         |
    | config_drive                         |                                      |
    | created                              | 2016-06-15T16:53:00Z                 |
    | flavor                               | m1.tiny (1)                          |
    | hostId                               |                                      |
    | id                                   | dfdfe15b-ce8d-469c-a9d8-2cea0e7ca287 |
    | image                                | cirros-0.4.0-x86_64 (0526d...7f39)   |
    | key_name                             | lb_kp1                               |
    | metadata                             | {}                                   |
    | name                                 | lb_vm1                               |
    | os-extended-volumes:volumes_attached | []                                   |
    | progress                             | 0                                    |
    | security_groups                      | lb_secgroup1                         |
    | status                               | BUILD                                |
    | tenant_id                            | 4b31d0508f83437e83d8f4d520cda22f     |
    | updated                              | 2016-06-15T16:53:00Z                 |
    | user_id                              | fd471475faa84680b97f18e55847ec0a     |
    +--------------------------------------+--------------------------------------+
    
                Server building... 100% complete
                Finished

    Boot second VM.

    tux > nova boot --flavor 1 --image 04b1528b-b1e2-45d4-96d1-fbe04c6b2efd --key-name lb_kp1 \
      --security-groups lb_secgroup1 --nic net-id=71a1ac88-30a3-48a3-a18b-d98509fbef5c \
      lb_vm2 --poll
    +--------------------------------------+---------------------------------------+
    | Property                             | Value                                 |
    +--------------------------------------+---------------------------------------+
    | OS-DCF:diskConfig                    | MANUAL                                |
    | OS-EXT-AZ:availability_zone          |                                       |
    | OS-EXT-SRV-ATTR:host                 | -                                     |
    | OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                     |
    | OS-EXT-SRV-ATTR:instance_name        | instance-00000034                     |
    | OS-EXT-STS:power_state               | 0                                     |
    | OS-EXT-STS:task_state                | scheduling                            |
    | OS-EXT-STS:vm_state                  | building                              |
    | OS-SRV-USG:launched_at               | -                                     |
    | OS-SRV-USG:terminated_at             | -                                     |
    | accessIPv4                           |                                       |
    | accessIPv6                           |                                       |
    | adminPass                            | 3nFXjNrTrmNm                          |
    | config_drive                         |                                       |
    | created                              | 2016-06-15T16:55:10Z                  |
    | flavor                               | m1.tiny (1)                           |
    | hostId                               |                                       |
    | id                                   | 3844bb10-2c61-4327-a0d4-0c043c674344  |
    | image                                | cirros-0.4.0-x86_64 (0526d...7f39)    |
    | key_name                             | lb_kp1                                |
    | metadata                             | {}                                    |
    | name                                 | lb_vm2                                |
    | os-extended-volumes:volumes_attached | []                                    |
    | progress                             | 0                                     |
    | security_groups                      | lb_secgroup1                          |
    | status                               | BUILD                                 |
    | tenant_id                            | 4b31d0508f83437e83d8f4d520cda22f      |
    | updated                              | 2016-06-15T16:55:09Z                  |
    | user_id                              | fd471475faa84680b97f18e55847ec0a      |
    +--------------------------------------+---------------------------------------+
    
                Server building... 100% complete
                Finished
  13. List the running VMs with nova list.

    tux > nova list
    +----------------+--------+--------+------------+-------------+-----------------------+
    | ID             | Name   | Status | Task State | Power State | Networks              |
    +----------------+--------+--------+------------+-------------+-----------------------+
    | dfdfe...7ca287 | lb_vm1 | ACTIVE | -          | Running     | lb_net1=10.247.94.132 |
    | 3844b...674344 | lb_vm2 | ACTIVE | -          | Running     | lb_net1=10.247.94.133 |
    +----------------+--------+--------+------------+-------------+-----------------------+
  14. Check ports.

    tux > neutron port-list
    +----------------+------+-------------------+--------------------------------+
    | id             | name | mac_address       | fixed_ips                      |
    +----------------+------+-------------------+--------------------------------+
    ...
    | 7e5e0...36450e |      | fa:16:3e:66:fd:2e | {"subnet_id": "6fc25...5f514", |
    |                |      |                   | "ip_address": "10.247.94.132"} |
    | ca95c...b36854 |      | fa:16:3e:e0:37:c4 | {"subnet_id": "6fc25...5f514", |
    |                |      |                   | "ip_address": "10.247.94.133"} |
    +----------------+------+-------------------+--------------------------------+
  15. Create the first floating IP.

    tux > neutron floatingip-create ext-net --port-id 7e5e0038-88cf-4f97-a366-b58cd836450e
    +---------------------+--------------------------------------+
    | Field               | Value                                |
    +---------------------+--------------------------------------+
    | fixed_ip_address    | 10.247.94.132                        |
    | floating_ip_address | 10.247.96.26                         |
    | floating_network_id | d3cb12a6-a000-4e3e-82c4-ee04aa169291 |
    | id                  | 3ce608bf-8835-4638-871d-0efe8ebf55ef |
    | port_id             | 7e5e0038-88cf-4f97-a366-b58cd836450e |
    | router_id           | 6aafc9a9-93f6-4d7e-94f2-3068b034b823 |
    | status              | DOWN                                 |
    | tenant_id           | 4b31d0508f83437e83d8f4d520cda22f     |
    +---------------------+--------------------------------------+
  16. Create the second floating IP.

    tux > neutron floatingip-create ext-net --port-id ca95cc24-4e8f-4415-9156-7b519eb36854
    +---------------------+--------------------------------------+
    | Field               | Value                                |
    +---------------------+--------------------------------------+
    | fixed_ip_address    | 10.247.94.133                        |
    | floating_ip_address | 10.247.96.27                         |
    | floating_network_id | d3cb12a6-a000-4e3e-82c4-ee04aa169291 |
    | id                  | 680c0375-a179-47cb-a8c5-02b836247444 |
    | port_id             | ca95cc24-4e8f-4415-9156-7b519eb36854 |
    | router_id           | 6aafc9a9-93f6-4d7e-94f2-3068b034b823 |
    | status              | DOWN                                 |
    | tenant_id           | 4b31d0508f83437e83d8f4d520cda22f     |
    +---------------------+--------------------------------------+
  17. List the floating IPs.

    tux > neutron floatingip-list
    +----------------+------------------+---------------------+---------------+
    | id             | fixed_ip_address | floating_ip_address | port_id       |
    +----------------+------------------+---------------------+---------------+
    | 3ce60...bf55ef | 10.247.94.132    | 10.247.96.26        | 7e5e0...6450e |
    | 680c0...247444 | 10.247.94.133    | 10.247.96.27        | ca95c...36854 |
    +----------------+------------------+---------------------+---------------+
  18. Show first Floating IP.

    tux > neutron floatingip-show 3ce608bf-8835-4638-871d-0efe8ebf55ef
    +---------------------+--------------------------------------+
    | Field               | Value                                |
    +---------------------+--------------------------------------+
    | fixed_ip_address    | 10.247.94.132                        |
    | floating_ip_address | 10.247.96.26                         |
    | floating_network_id | d3cb12a6-a000-4e3e-82c4-ee04aa169291 |
    | id                  | 3ce608bf-8835-4638-871d-0efe8ebf55ef |
    | port_id             | 7e5e0038-88cf-4f97-a366-b58cd836450e |
    | router_id           | 6aafc9a9-93f6-4d7e-94f2-3068b034b823 |
    | status              | ACTIVE                               |
    | tenant_id           | 4b31d0508f83437e83d8f4d520cda22f     |
    +---------------------+--------------------------------------+
  19. Show second Floating IP.

    tux > neutron floatingip-show 680c0375-a179-47cb-a8c5-02b836247444
    +---------------------+--------------------------------------+
    | Field               | Value                                |
    +---------------------+--------------------------------------+
    | fixed_ip_address    | 10.247.94.133                        |
    | floating_ip_address | 10.247.96.27                         |
    | floating_network_id | d3cb12a6-a000-4e3e-82c4-ee04aa169291 |
    | id                  | 680c0375-a179-47cb-a8c5-02b836247444 |
    | port_id             | ca95cc24-4e8f-4415-9156-7b519eb36854 |
    | router_id           | 6aafc9a9-93f6-4d7e-94f2-3068b034b823 |
    | status              | ACTIVE                               |
    | tenant_id           | 4b31d0508f83437e83d8f4d520cda22f     |
    +---------------------+--------------------------------------+
  20. Ping first Floating IP.

    tux > ping -c 1 10.247.96.26
    PING 10.247.96.26 (10.247.96.26) 56(84) bytes of data.
    64 bytes from 10.247.96.26: icmp_seq=1 ttl=62 time=3.50 ms
    
    --- 10.247.96.26 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 3.505/3.505/3.505/0.000 ms
  21. Ping second Floating IP.

    tux > ping -c 1 10.247.96.27
    PING 10.247.96.27 (10.247.96.27) 56(84) bytes of data.
    64 bytes from 10.247.96.27: icmp_seq=1 ttl=62 time=3.47 ms
    
    --- 10.247.96.27 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 3.473/3.473/3.473/0.000 ms
  22. Listing the VMs will give you both the fixed and floating IPs for each virtual machine.

    tux > nova list
    +---------------+--------+--------+-------+---------+-------------------------------------+
    | ID            | Name   | Status | Task  | Power   | Networks                            |
    |               |        |        | State | State   |                                     |
    +---------------+--------+--------+-------+---------+-------------------------------------+
    | dfdfe...ca287 | lb_vm1 | ACTIVE | -     | Running | lb_net1=10.247.94.132, 10.247.96.26 |
    | 3844b...74344 | lb_vm2 | ACTIVE | -     | Running | lb_net1=10.247.94.133, 10.247.96.27 |
    +---------------+--------+--------+-------+---------+-------------------------------------+
  23. List the floating IPs.

    tux > neutron floatingip-list
    +---------------+------------------+---------------------+-----------------+
    | id            | fixed_ip_address | floating_ip_address | port_id         |
    +---------------+------------------+---------------------+-----------------+
    | 3ce60...f55ef | 10.247.94.132    | 10.247.96.26        | 7e5e00...36450e |
    | 680c0...47444 | 10.247.94.133    | 10.247.96.27        | ca95cc...b36854 |
    +---------------+------------------+---------------------+-----------------+

32.4 Create Load Balancers

The following steps set up a new Octavia load balancer.

Note

The following examples assume names and values from the previous section.

  1. Create a load balancer for the subnet.

    tux > neutron lbaas-loadbalancer-create --provider octavia \
      --name lb1 6fc2572c-53b3-41d0-ab63-342d9515f514
    +---------------------+--------------------------------------+
    | Field               | Value                                |
    +---------------------+--------------------------------------+
    | admin_state_up      | True                                 |
    | description         |                                      |
    | id                  | 3d9170a1-8605-43e6-9255-e14a8b4aae53 |
    | listeners           |                                      |
    | name                | lb1                                  |
    | operating_status    | OFFLINE                              |
    | provider            | octavia                              |
    | provisioning_status | PENDING_CREATE                       |
    | tenant_id           | 4b31d0508f83437e83d8f4d520cda22f     |
    | vip_address         | 10.247.94.134                        |
    | vip_port_id         | da28aed3-0eb4-4139-afcf-2d8fd3fc3c51 |
    | vip_subnet_id       | 6fc2572c-53b3-41d0-ab63-342d9515f514 |
    +---------------------+--------------------------------------+
  2. List the load balancers. You will need to wait until the load balancer provisioning_status is ACTIVE before proceeding to the next step (a polling sketch follows this procedure).

    tux > neutron lbaas-loadbalancer-list
    +---------------+------+---------------+---------------------+----------+
    | id            | name | vip_address   | provisioning_status | provider |
    +---------------+------+---------------+---------------------+----------+
    | 3d917...aae53 | lb1  | 10.247.94.134 | ACTIVE              | octavia  |
    +---------------+------+---------------+---------------------+----------+
  3. Once the load balancer is created, create the listener. This may take some time.

    tux > neutron lbaas-listener-create --loadbalancer lb1 \
      --protocol HTTP --protocol-port=80 --name lb1_listener
    +---------------------------+------------------------------------------------+
    | Field                     | Value                                          |
    +---------------------------+------------------------------------------------+
    | admin_state_up            | True                                           |
    | connection_limit          | -1                                             |
    | default_pool_id           |                                                |
    | default_tls_container_ref |                                                |
    | description               |                                                |
    | id                        | c723b5c8-e2df-48d5-a54c-fc240ac7b539           |
    | loadbalancers             | {"id": "3d9170a1-8605-43e6-9255-e14a8b4aae53"} |
    | name                      | lb1_listener                                   |
    | protocol                  | HTTP                                           |
    | protocol_port             | 80                                             |
    | sni_container_refs        |                                                |
    | tenant_id                 | 4b31d0508f83437e83d8f4d520cda22f               |
    +---------------------------+------------------------------------------------+
  4. Create the load balancing pool. During the creation of the load balancing pool, the status of the load balancer goes to PENDING_UPDATE. Use neutron lbaas-loadbalancer-list to watch for the change to ACTIVE (or use the polling sketch after this procedure). Once the load balancer returns to ACTIVE, proceed with the next step.

    tux > neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN \
      --listener lb1_listener --protocol HTTP --name lb1_pool
    +---------------------+------------------------------------------------+
    | Field               | Value                                          |
    +---------------------+------------------------------------------------+
    | admin_state_up      | True                                           |
    | description         |                                                |
    | healthmonitor_id    |                                                |
    | id                  | 0f5951ee-c2a0-4e62-ae44-e1491a8988e1           |
    | lb_algorithm        | ROUND_ROBIN                                    |
    | listeners           | {"id": "c723b5c8-e2df-48d5-a54c-fc240ac7b539"} |
    | members             |                                                |
    | name                | lb1_pool                                       |
    | protocol            | HTTP                                           |
    | session_persistence |                                                |
    | tenant_id           | 4b31d0508f83437e83d8f4d520cda22f               |
    +---------------------+------------------------------------------------+
  5. Create the first member of the load balancing pool.

    tux > neutron lbaas-member-create --subnet 6fc2572c-53b3-41d0-ab63-342d9515f514 \
      --address 10.247.94.132 --protocol-port 80 lb1_pool
    +----------------+--------------------------------------+
    | Field          | Value                                |
    +----------------+--------------------------------------+
    | address        | 10.247.94.132                        |
    | admin_state_up | True                                 |
    | id             | 61da1e21-e0ae-4158-935a-c909a81470e1 |
    | protocol_port  | 80                                   |
    | subnet_id      | 6fc2572c-53b3-41d0-ab63-342d9515f514 |
    | tenant_id      | 4b31d0508f83437e83d8f4d520cda22f     |
    | weight         | 1                                    |
    +----------------+--------------------------------------+
  6. Create the second member.

    tux > neutron lbaas-member-create --subnet 6fc2572c-53b3-41d0-ab63-342d9515f514 \
      --address 10.247.94.133 --protocol-port 80 lb1_pool
    +----------------+--------------------------------------+
    | Field          | Value                                |
    +----------------+--------------------------------------+
    | address        | 10.247.94.133                        |
    | admin_state_up | True                                 |
    | id             | 459c7f21-46f7-49e8-9d10-dc7da09f8d5a |
    | protocol_port  | 80                                   |
    | subnet_id      | 6fc2572c-53b3-41d0-ab63-342d9515f514 |
    | tenant_id      | 4b31d0508f83437e83d8f4d520cda22f     |
    | weight         | 1                                    |
    +----------------+--------------------------------------+
  7. Check that the load balancer is active and list the pool members.

    tux > neutron lbaas-loadbalancer-list
    +----------------+------+---------------+---------------------+----------+
    | id             | name | vip_address   | provisioning_status | provider |
    +----------------+------+---------------+---------------------+----------+
    | 3d9170...aae53 | lb1  | 10.247.94.134 | ACTIVE              | octavia  |
    +----------------+------+---------------+---------------------+----------+
    
    neutron lbaas-member-list lb1_pool
    +---------------+---------------+---------------+--------+---------------+----------------+
    | id            | address       | protocol_port | weight | subnet_id     | admin_state_up |
    +---------------+---------------+---------------+--------+---------------+----------------+
    | 61da1...470e1 | 10.247.94.132 |            80 |      1 | 6fc25...5f514 | True           |
    | 459c7...f8d5a | 10.247.94.133 |            80 |      1 | 6fc25...5f514 | True           |
    +---------------+---------------+---------------+--------+---------------+----------------+
  8. You can view the details of the load balancer, listener and pool.

    tux > neutron lbaas-loadbalancer-show 3d9170a1-8605-43e6-9255-e14a8b4aae53
    +---------------------+------------------------------------------------+
    | Field               | Value                                          |
    +---------------------+------------------------------------------------+
    | admin_state_up      | True                                           |
    | description         |                                                |
    | id                  | 3d9170a1-8605-43e6-9255-e14a8b4aae53           |
    | listeners           | {"id": "c723b5c8-e2df-48d5-a54c-fc240ac7b539"} |
    | name                | lb1                                            |
    | operating_status    | ONLINE                                         |
    | provider            | octavia                                        |
    | provisioning_status | ACTIVE                                         |
    | tenant_id           | 4b31d0508f83437e83d8f4d520cda22f               |
    | vip_address         | 10.247.94.134                                  |
    | vip_port_id         | da28aed3-0eb4-4139-afcf-2d8fd3fc3c51           |
    | vip_subnet_id       | 6fc2572c-53b3-41d0-ab63-342d9515f514           |
    +---------------------+------------------------------------------------+
    
    neutron lbaas-listener-list
    +-------------+-----------------+--------------+----------+---------------+----------------+
    | id          | default_pool_id | name         | protocol | protocol_port | admin_state_up |
    +-------------+-----------------+--------------+----------+---------------+----------------+
    | c723...b539 | 0f595...8988e1  | lb1_listener | HTTP     |            80 | True           |
    +-------------+-----------------+--------------+----------+---------------+----------------+
    
    neutron lbaas-pool-list
    +--------------------------------------+----------+----------+----------------+
    | id                                   | name     | protocol | admin_state_up |
    +--------------------------------------+----------+----------+----------------+
    | 0f5951ee-c2a0-4e62-ae44-e1491a8988e1 | lb1_pool | HTTP     | True           |
    +--------------------------------------+----------+----------+----------------+
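
As noted in steps 2 and 4 above, you must wait for the load balancer provisioning_status to return to ACTIVE before continuing. A minimal shell polling sketch, assuming the load balancer name lb1 used in this section, might look like this:

tux > until neutron lbaas-loadbalancer-show lb1 | grep provisioning_status | grep -q ACTIVE; do
  echo "Waiting for lb1 to become ACTIVE ..."
  sleep 10
done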

32.5 Create Floating IPs for Load Balancer

To create the floating IP for the load balancer, you will need to list the current ports to find the ID of the load balancer's VIP port. Once you have the port ID, you can create the floating IP. Alternatively, you can read the VIP port ID directly from the load balancer details, as shown in the sketch below.
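
Since the VIP port ID is also included in the load balancer details, a shorter way to look it up (a convenience sketch using the lb1 load balancer from the previous section) is:

tux > neutron lbaas-loadbalancer-show lb1 | grep vip_port_id
| vip_port_id         | da28aed3-0eb4-4139-afcf-2d8fd3fc3c51 |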

  1. List the current ports.

    tux > neutron port-list
    +--------------+---------------+-------------------+-----------------------------------------+
    | id           | name          | mac_address       | fixed_ips                               |
    +--------------+---------------+-------------------+-----------------------------------------+
    ...
    | 7e5e...6450e |               | fa:16:3e:66:fd:2e | {"subnet_id": "6fc2572c-                |
    |              |               |                   | 53b3-41d0-ab63-342d9515f514",           |
    |              |               |                   | "ip_address": "10.247.94.132"}          |
    | a3d0...55efe |               | fa:16:3e:91:a2:5b | {"subnet_id": "f00299f8-3403-45ae-ac4b- |
    |              |               |                   | 58af41d57bdc", "ip_address":            |
    |              |               |                   | "10.247.94.142"}                        |
    | ca95...36854 |               | fa:16:3e:e0:37:c4 | {"subnet_id": "6fc2572c-                |
    |              |               |                   | 53b3-41d0-ab63-342d9515f514",           |
    |              |               |                   | "ip_address": "10.247.94.133"}          |
    | da28...c3c51 | loadbalancer- | fa:16:3e:1d:a2:1c | {"subnet_id": "6fc2572c-                |
    |              | 3d917...aae53 |                   | 53b3-41d0-ab63-342d9515f514",           |
    |              |               |                   | "ip_address": "10.247.94.134"}          |
    +--------------+---------------+-------------------+-----------------------------------------+
  2. Create the floating IP for the load balancer.

    tux > neutron floatingip-create ext-net --port-id da28aed3-0eb4-4139-afcf-2d8fd3fc3c51
    Created a new floatingip:
    +---------------------+--------------------------------------+
    | Field               | Value                                |
    +---------------------+--------------------------------------+
    | fixed_ip_address    | 10.247.94.134                        |
    | floating_ip_address | 10.247.96.28                         |
    | floating_network_id | d3cb12a6-a000-4e3e-82c4-ee04aa169291 |
    | id                  | 9a3629bd-b0a6-474c-abe9-89c6ecb2b22c |
    | port_id             | da28aed3-0eb4-4139-afcf-2d8fd3fc3c51 |
    | router_id           | 6aafc9a9-93f6-4d7e-94f2-3068b034b823 |
    | status              | DOWN                                 |
    | tenant_id           | 4b31d0508f83437e83d8f4d520cda22f     |
    +---------------------+--------------------------------------+

32.6 Testing the Octavia Load Balancer

To test the load balancer, create the following web server script and run it on each virtual machine. You will use curl IP_ADDRESS to test whether the load balancing service is responding properly.

  1. Start web servers on both of the virtual machines. Create the webserver.sh script with the contents below. In this example, the port is 80.

    tux > vi webserver.sh
    
    #!/bin/bash
    
    MYIP=$(/sbin/ifconfig eth0|grep 'inet addr'|awk -F: '{print $2}'| awk '{print $1}');
    while true; do
        echo -e "HTTP/1.0 200 OK
    
    Welcome to $MYIP" | sudo nc -l -p 80
    done
  2. Deploy the web server and run it on the first virtual machine.

    ardana > ssh-keygen -R 10.247.96.26
    /home/ardana/.ssh/known_hosts updated.
    Original contents retained as /home/ardana/.ssh/known_hosts.old
    
    scp -o StrictHostKeyChecking=no -i lb_kp1.pem webserver.sh cirros@10.247.96.26:
    webserver.sh                                      100%  263     0.3KB/s   00:00
    
    ssh -o StrictHostKeyChecking=no -i lb_kp1.pem cirros@10.247.96.26 'chmod +x ./webserver.sh'
    ssh -o StrictHostKeyChecking=no -i lb_kp1.pem cirros@10.247.96.26 ./webserver.sh
  3. Test the first web server.

    tux > curl 10.247.96.26
     Welcome to 10.247.94.132
  4. Deploy and start the web server on the second virtual machine like you did in the previous steps. Once the second web server is running, list the floating IPs.

    tux > neutron floatingip-list
    +----------------+------------------+---------------------+---------------+
    | id             | fixed_ip_address | floating_ip_address | port_id       |
    +----------------+------------------+---------------------+---------------+
    | 3ce60...bf55ef | 10.247.94.132    | 10.247.96.26        | 7e5e0...6450e |
    | 680c0...247444 | 10.247.94.133    | 10.247.96.27        | ca95c...36854 |
    | 9a362...b2b22c | 10.247.94.134    | 10.247.96.28        | da28a...c3c51 |
    +----------------+------------------+---------------------+---------------+
  5. Display the floating IP for the load balancer.

    tux > neutron floatingip-show 9a3629bd-b0a6-474c-abe9-89c6ecb2b22c
    +---------------------+--------------------------------------+
    | Field               | Value                                |
    +---------------------+--------------------------------------+
    | fixed_ip_address    | 10.247.94.134                        |
    | floating_ip_address | 10.247.96.28                         |
    | floating_network_id | d3cb12a6-a000-4e3e-82c4-ee04aa169291 |
    | id                  | 9a3629bd-b0a6-474c-abe9-89c6ecb2b22c |
    | port_id             | da28aed3-0eb4-4139-afcf-2d8fd3fc3c51 |
    | router_id           | 6aafc9a9-93f6-4d7e-94f2-3068b034b823 |
    | status              | ACTIVE                               |
    | tenant_id           | 4b31d0508f83437e83d8f4d520cda22f     |
    +---------------------+--------------------------------------+
  6. Finally, test the load balancing. Repeated requests to the load balancer's floating IP should alternate between the two back-end servers (a short loop sketch follows this procedure).

    tux > curl 10.247.96.28
    Welcome to 10.247.94.132
    
    tux > curl 10.247.96.28
    Welcome to 10.247.94.133
    
    tux > curl 10.247.96.28
    Welcome to 10.247.94.132
    
    tux > curl 10.247.96.28
    Welcome to 10.247.94.133
    
    tux > curl 10.247.96.28
    Welcome to 10.247.94.132
    
    tux > curl 10.247.96.28
    Welcome to 10.247.94.133
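
To exercise the round-robin distribution without typing the command repeatedly, a simple loop can be used. This is an illustrative sketch; the address is the floating IP assigned to the load balancer above:

tux > for i in $(seq 1 6); do curl 10.247.96.28; done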