43 Configuring Load Balancer as a Service #
The SUSE OpenStack Cloud neutron LBaaS service supports several load balancing providers. By default, both Octavia and the namespace HAProxy driver are configured for use.
If you do not specify the --provider option, the service defaults to Octavia. The Octavia driver provides more functionality than the HAProxy namespace driver, which is deprecated and will be retired in a future version of SUSE OpenStack Cloud.
Additional drivers are available for third-party hardware load balancers; refer to the vendor documentation directly. The openstack network service provider list command displays the currently installed load balancer drivers, as well as other installed services such as VPN.
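For example (the exact rows vary by deployment; this output is illustrative only):

ardana > openstack network service provider list
+----------------+---------+---------+
| Service Type   | Name    | Default |
+----------------+---------+---------+
| LOADBALANCERV2 | octavia | True    |
| LOADBALANCERV2 | haproxy | False   |
| VPN            | vpnaas  | True    |
+----------------+---------+---------+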
43.1 Summary #
The following procedure demonstrates how to set up a load balancer and verify that round-robin load balancing works between two web servers. For demonstration purposes, permissive security group rules and floating IPs are applied to the web servers. Adjust these configuration parameters as needed.
43.2 Prerequisites #
You need to have an external network and a registered image to test LBaaS functionality.
Creating an external network: Section 38.4, “Creating an External Network”.
Creating and uploading a glance image: Section 38.3, “Uploading an Image for Use”.
43.3 Octavia Load Balancing Provider #
The Octavia load balancing provider bundled with SUSE OpenStack Cloud 9 is an operator-grade load balancer for OpenStack, based on the OpenStack Rocky version of Octavia. It differs from the namespace driver by starting a new nova virtual machine (called an amphora) to house the HAProxy software that provides the load balancing function. A virtual machine for each requested load balancer gives better separation of load balancers between tenants, and makes it simpler to grow load balancing capacity along with compute node growth. Additionally, if the virtual machine fails for any reason, Octavia will replace it with a VM from a pool of spare VMs, assuming that feature is configured.
The Health Monitor will not create or replace failed amphoras. If the pool of spare VMs is exhausted, there will be no additional virtual machines available to handle load balancing requests.
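The size of the spare pool is controlled by Octavia's housekeeping settings. A minimal sketch, assuming the stock octavia.conf layout (the value shown is illustrative; 0 disables the spare pool):

[house_keeping]
# Number of spare amphora VMs kept booted and ready for fast
# replacement of failed load balancers (illustrative value)
spare_amphora_pool_size = 1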
Octavia uses two-way SSL encryption to communicate with amphoras. Demo Certificate Authority (CA) certificates are included with SUSE OpenStack Cloud 9 in ~/scratch/ansible/next/ardana/ansible/roles/octavia-common/files on the Cloud Lifecycle Manager. For additional security in production deployments, replace all certificate authorities with ones you generate by running the following commands:
ardana > openssl genrsa -passout pass:foobar -des3 -out cakey.pem 2048
ardana > openssl req -x509 -passin pass:foobar -new -nodes -key cakey.pem -out ca_01.pem
ardana > openssl genrsa -passout pass:foobar -des3 -out servercakey.pem 2048
ardana > openssl req -x509 -passin pass:foobar -new -nodes -key servercakey.pem -out serverca_01.pem
For more details, refer to the openssl man page.
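To sanity-check a freshly generated CA certificate before deploying it, you can inspect its subject and validity dates with openssl (a quick verification step, not part of the original procedure):

ardana > openssl x509 -in ca_01.pem -noout -subject -dates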
If you change the certificate authority and have amphoras running with an old CA, you will not be able to control the amphoras. The amphoras will need to be failed over so they can utilize the new certificate. If you change the CA password for the server certificate, you need to change that in the Octavia configuration files as well. For more information, see Section 10.4.9.2, “Tuning Octavia Installation”.
43.4 Prerequisite Setup #
Octavia Client Installation
The Octavia client must be installed on the control plane; see Chapter 40, Installing OpenStack Clients.
Octavia Network and Management Network Ports
The Octavia management network and the Management network must have access to each other. If a firewall is configured between the two networks, open the following ports to allow network traffic between them (a firewall rule sketch follows the list of ports).
From Management network to Octavia network
TCP 9443 (amphora API)
From Octavia network to Management network
TCP 9876 (Octavia API)
UDP 5555 (Octavia Health Manager)
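As an illustration only, the rules above might translate into iptables FORWARD rules like the following on a gateway between the two networks. The interface variables are hypothetical placeholders for your actual Octavia and Management network interfaces:

# Hypothetical interface names; substitute your own
OCT_IF=eth1
MGMT_IF=eth2
# Management network -> Octavia network: amphora API
iptables -A FORWARD -i $MGMT_IF -o $OCT_IF -p tcp --dport 9443 -j ACCEPT
# Octavia network -> Management network: Octavia API and Health Manager
iptables -A FORWARD -i $OCT_IF -o $MGMT_IF -p tcp --dport 9876 -j ACCEPT
iptables -A FORWARD -i $OCT_IF -o $MGMT_IF -p udp --dport 5555 -j ACCEPT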
Optimizing the Octavia memory footprint
By default, the Octavia health manager will use up to 4.2 GB of memory. This can be safely reduced in entry-scale environments by modifying ~/openstack/my_cloud/config/octavia/octavia-health-manager.conf.j2 to add the following lines in the [health_manager] section:

health_update_threads = 2
stats_update_threads = 2
After editing the octavia-health-manager.conf.j2 file, run the config-processor-run, ready-deployment, and octavia-reconfigure playbooks to apply these changes. This limits the number of processes providing updates for amphora health and statistics.
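The playbook sequence would look like the following sketch, assuming the standard Cloud Lifecycle Manager playbook locations:

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml
ardana > ansible-playbook -i hosts/localhost ready-deployment.yml
ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts octavia-reconfigure.yml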
Installing the Amphora Image
Octavia uses nova VMs for its load balancing function. SUSE provides an image, called octavia-amphora, that is used to boot those VMs. Without this image, the Octavia load balancer will not work.
Register the image. The OpenStack load balancing service (Octavia) does not automatically register the Amphora guest image.
The Amphora image is in an RPM package managed by zypper. Display the full path to that file with the following command:

ardana > find /srv -type f -name "*amphora-image*"

For example:

/srv/www/suse-12.4/x86_64/repos/Cloud/suse/noarch/openstack-octavia-amphora-image-x86_64-0.1.0-13.157.noarch.rpm

The Amphora image may be updated via maintenance updates, which will change the filename.

Switch to the ansible directory and register the image by giving the full path and name of the Amphora image as an argument to service_package, replacing the filename as needed to reflect an updated amphora image RPM:

ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts service-guest-image.yml \
  -e service_package=\
/srv/www/suse-12.4/x86_64/repos/Cloud/suse/noarch/openstack-octavia-amphora-image-x86_64-0.1.0-13.157.noarch.rpm

Source the service user (this can be done on a different computer):

ardana > source service.osrc

Verify that the image was registered (this can be done on any computer with access to the glance CLI client):

ardana > openstack image list
+--------------------------------------+------------------------+--------+
| ID                                   | Name                   | Status |
+--------------------------------------+------------------------+--------+
...
| 1d4dd309-8670-46b6-801d-3d6af849b6a9 | octavia-amphora-x86_64 | active |
...

Important: In the example above, the status of the octavia-amphora-x86_64 image is active, which means the image was successfully registered. If the status of the image is queued, run the image registration again. If you run the registration by accident, the system will only upload a new image if the underlying image has changed.
Ensure there are not multiple amphora images listed. Execute the following command:

ardana > openstack image list --tag amphora

If the above command returns more than one image, unset the old amphora image with the following command:

ardana > openstack image unset --tag amphora <oldimageid>

If you have already created load balancers, they will not receive the new image. Only load balancers created after the image has been successfully installed will use it. If existing load balancers need to be switched to the new image, follow the instructions in Section 10.4.9.2, “Tuning Octavia Installation”.
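As a quick check that exactly one image carries the amphora tag, you can count the IDs (a convenience one-liner, not part of the original procedure):

ardana > openstack image list --tag amphora -f value -c ID | wc -l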
Set up the network, subnet, router, security settings, and IPs

If you have already created a network, subnet, router, security settings, and IPs, you can skip the following steps and go directly to creating the load balancers. In this example we boot two VMs running web servers, and then use curl to demonstrate load balancing between the two VMs. Along the way we save important parameters to bash variables and use them later in the process.
Make a tenant network for load balancer clients (the web servers):
ardana > openstack network create lb_net2
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2019-10-23T13:27:57Z |
| description | |
| dns_domain | |
| id | 50a66468-084b-457f-88e4-2edb7b81851e |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | False |
| is_vlan_transparent | None |
| mtu | 1450 |
| name | lb_net2 |
| port_security_enabled | False |
| project_id | de095070f242416cb3dc4cd00e3c79f7 |
| provider:network_type | vxlan |
| provider:physical_network | None |
| provider:segmentation_id | 1080 |
| qos_policy_id | None |
| revision_number | 1 |
| router:external | Internal |
| segments | None |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | |
| updated_at | 2019-10-23T13:27:57Z |
+---------------------------+--------------------------------------+

Create a subnet for the load balancing clients on the tenant network:
ardana > openstack subnet create --network lb_net2 --subnet-range 12.12.12.0/24 lb_subnet2
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 12.12.12.2-12.12.12.254 |
| cidr | 12.12.12.0/24 |
| created_at | 2019-10-23T13:29:45Z |
| description | |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 12.12.12.1 |
| host_routes | |
| id | c141858a-a792-4c89-91f0-de4dc4694a7f |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | lb_subnet2 |
| network_id | 50a66468-084b-457f-88e4-2edb7b81851e |
| project_id | de095070f242416cb3dc4cd00e3c79f7 |
| revision_number | 0 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| tags | |
| updated_at | 2019-10-23T13:29:45Z |
+-------------------+--------------------------------------+

Save the ID of the tenant subnet. The load balancer will be attached to it later:
ardana > VIP_SUBNET="c141858a-a792-4c89-91f0-de4dc4694a7f"

Create a router for the client VMs:
ardana > openstack router create --centralized lb_router2
+-------------------------+--------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2019-10-23T13:30:21Z |
| description | |
| distributed | False |
| external_gateway_info | None |
| flavor_id | None |
| ha | False |
| id | 1c9949fb-a500-475d-8694-346cf66ebf9a |
| name | lb_router2 |
| project_id | de095070f242416cb3dc4cd00e3c79f7 |
| revision_number | 0 |
| routes | |
| status | ACTIVE |
| tags | |
| updated_at | 2019-10-23T13:30:21Z |
+-------------------------+--------------------------------------+

Add the subnet to the router and connect it to the external gateway:
ardana > openstack router add subnet lb_router2 lb_subnet2

Set the gateway for the router. First, find the name of the external network:
ardana > openstack network list
+--------------------------------------+------------------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+------------------+--------------------------------------+
| 2ed7deca-81ed-45ee-87ce-aeb4f565d3ad | ext-net | 45d31556-08b8-4b0e-98d8-c09dd986296a |
| 50a66468-084b-457f-88e4-2edb7b81851e | lb_net2 | c141858a-a792-4c89-91f0-de4dc4694a7f |
| 898975bd-e3cc-4afd-8605-3c5606fd5c54 | OCTAVIA-MGMT-NET | 68eed774-c07f-45bb-b37a-489515108acb |
+--------------------------------------+------------------+--------------------------------------+

Save the external network name and set it as the router's gateway:

ardana > EXT_NET="ext-net"
ardana > openstack router set --external-gateway $EXT_NET lb_router2

Check the router:
ardana > openstack router show lb_router2 --fit-width
+-------------------------+-------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------------+-------------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | nova |
| created_at | 2019-10-23T13:30:21Z |
| description | |
| distributed | False |
| external_gateway_info | {"network_id": "2ed7deca-81ed-45ee-87ce-aeb4f565d3ad", "enable_snat": true, "external_fixed_ips": |
| | [{"subnet_id": "45d31556-08b8-4b0e-98d8-c09dd986296a", "ip_address": "10.84.57.38"}]} |
| flavor_id | None |
| ha | False |
| id | 1c9949fb-a500-475d-8694-346cf66ebf9a |
| interfaces_info | [{"subnet_id": "c141858a-a792-4c89-91f0-de4dc4694a7f", "ip_address": "12.12.12.1", "port_id": |
| | "3d8a8605-dbe5-4fa6-87c3-b351763a9a63"}] |
| name | lb_router2 |
| project_id | de095070f242416cb3dc4cd00e3c79f7 |
| revision_number | 3 |
| routes | |
| status | ACTIVE |
| tags | |
| updated_at | 2019-10-23T13:33:22Z |
+-------------------------+-------------------------------------------------------------------------------------------------------+

Create a security group for testing with unrestricted inbound and outbound traffic (you will have to restrict the traffic later):
ardana > openstack security group create letmein
+-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at | 2019-05-13T18:50:28Z |
| description | letmein |
| id | 1d136bc8-8186-4c08-88f4-fe83465f3c30 |
| name | letmein |
| project_id | 5bd04b76d1db42988612e5f27170a40a |
| revision_number | 2 |
| rules | created_at='2019-05-13T18:50:28Z', direction='egress', ethertype='IPv6', id='1f73b026-5f7e-4b2c-9a72-07c88d8ea82d', updated_at='2019-05-13T18:50:28Z' |
| | created_at='2019-05-13T18:50:28Z', direction='egress', ethertype='IPv4', id='880a04ee-14b4-4d05-b038-6dd42005d3c0', updated_at='2019-05-13T18:50:28Z' |
| updated_at | 2019-05-13T18:50:28Z |
+-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+

Create security rules for the VMs:
ardana > openstack security group rule create letmein --ingress --protocol tcp --dst-port 1:1024
ardana > openstack security group rule create letmein --ingress --protocol icmp
ardana > openstack security group rule create letmein --egress --protocol tcp --dst-port 1:1024
ardana > openstack security group rule create letmein --egress --protocol icmp

If you have not already created a keypair for the web servers, create one now with the openstack keypair create command. You will use the keypair to boot images:

ardana > openstack keypair create lb_kp1 > lb_kp1.pem
ardana > chmod 400 lb_kp1.pem
ardana > ls -la lb*
-r-------- 1 ardana ardana 1680 May 13 12:52 lb_kp1.pem

Verify the keypair list:
ardana > openstack keypair list
+---------------+-------------------------------------------------+
| Name | Fingerprint |
+---------------+-------------------------------------------------+
| id_rsa2 | d3:82:4c:fc:80:79:db:94:b6:31:f1:15:8e:ba:35:0b |
| lb_kp1 | 78:a8:0b:5b:2d:59:f4:68:4c:cb:49:c3:f8:81:3e:d8 |
+---------------+-------------------------------------------------+

List the image names available to boot, and boot two VMs:
ardana > openstack image list | grep cirros
| faa65f54-e38b-43dd-b0db-ae5f5e3d9b83 | cirros-0.4.0-x86_64 | active |

Boot a VM that will host the pseudo web server:
ardana > openstack server create --flavor m1.tiny --image cirros-0.4.0-x86_64 \
  --key-name lb_kp1 --security-group letmein --nic net-id=lb_net2 lb2_vm1 --wait
+-------------------------------------+------------------------------------------------------------+
| Field | Value |
+-------------------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | cjd9-cp1-comp0001-mgmt |
| OS-EXT-SRV-ATTR:hypervisor_hostname | cjd9-cp1-comp0001-mgmt |
| OS-EXT-SRV-ATTR:instance_name | instance-0000001f |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2019-10-23T13:36:32.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | lb_net2=12.12.12.13 |
| adminPass | 79GcsJQfNt3p |
| config_drive | |
| created | 2019-10-23T13:36:17Z |
| flavor | m1.tiny (1) |
| hostId | ed0511cb20d44f420e707fbc801643e0658eb2efbf980ef986aa45d0 |
| id | 874e675b-727b-44a2-99a9-0ff178590f86 |
| image | cirros-0.4.0-x86_64 (96e0df13-573f-4672-97b9-3fe67ff84d6a) |
| key_name | lb_kp1 |
| name | lb2_vm1 |
| progress | 0 |
| project_id | de095070f242416cb3dc4cd00e3c79f7 |
| properties | |
| security_groups | name='letmein' |
| status | ACTIVE |
| updated | 2019-10-23T13:36:32Z |
| user_id | ec4d4dbf03be4bf09f105c06c7562ba7 |
| volumes_attached | |
+-------------------------------------+------------------------------------------------------------+

Save the private IP of the first web server:
ardana > IP_LBVM1="12.12.12.13"

Boot a second pseudo web server:
ardana > openstack server create --flavor m1.tiny --image cirros-0.4.0-x86_64 \
  --key-name lb_kp1 --security-group letmein --nic net-id=lb_net2 lb2_vm2 --wait
+-------------------------------------+------------------------------------------------------------+
| Field | Value |
+-------------------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | cjd9-cp1-comp0002-mgmt |
| OS-EXT-SRV-ATTR:hypervisor_hostname | cjd9-cp1-comp0002-mgmt |
| OS-EXT-SRV-ATTR:instance_name | instance-00000022 |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2019-10-23T13:38:09.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | lb_net2=12.12.12.5 |
| adminPass | CGDs3mnHcnv8 |
| config_drive | |
| created | 2019-10-23T13:37:55Z |
| flavor | m1.tiny (1) |
| hostId | bf4424315eb006ccb91f80e1024b2617cbb86a2cc70cf912b6ca1f95 |
| id | a60dfb93-255b-476d-bf28-2b4f0da285e0 |
| image | cirros-0.4.0-x86_64 (96e0df13-573f-4672-97b9-3fe67ff84d6a) |
| key_name | lb_kp1 |
| name | lb2_vm2 |
| progress | 0 |
| project_id | de095070f242416cb3dc4cd00e3c79f7 |
| properties | |
| security_groups | name='letmein' |
| status | ACTIVE |
| updated | 2019-10-23T13:38:09Z |
| user_id | ec4d4dbf03be4bf09f105c06c7562ba7 |
| volumes_attached | |
+-------------------------------------+------------------------------------------------------------+

Save the private IP of the second web server:
ardana > IP_LBVM2="12.12.12.5"

Verify that all the servers are running:
ardana > openstack server list
+--------------------------------------+---------+--------+----------------------------------+---------------------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+---------+--------+----------------------------------+---------------------+---------+
| a60dfb93-255b-476d-bf28-2b4f0da285e0 | lb2_vm2 | ACTIVE | lb_net2=12.12.12.5 | cirros-0.4.0-x86_64 | m1.tiny |
| 874e675b-727b-44a2-99a9-0ff178590f86 | lb2_vm1 | ACTIVE | lb_net2=12.12.12.13 | cirros-0.4.0-x86_64 | m1.tiny |
+--------------------------------------+---------+--------+----------------------------------+---------------------+---------+

Check the ports for the VMs (12.12.12.13 and 12.12.12.5):
ardana > openstack port list | grep -e $IP_LBVM2 -e $IP_LBVM1
| 42c356f1-8c49-469a-99c8-12c541378741 | | fa:16:3e:2d:6b:be | ip_address='12.12.12.13', subnet_id='c141858a-a792-4c89-91f0-de4dc4694a7f' | ACTIVE |
| fb7166c1-06f7-4457-bc0d-4692fc2c7fa2 | | fa:16:3e:ac:5d:82 | ip_address='12.12.12.5', subnet_id='c141858a-a792-4c89-91f0-de4dc4694a7f' | ACTIVE |

Create the floating IPs for the VMs:
ardana > openstack floating ip create $EXT_NET --port fb7166c1-06f7-4457-bc0d-4692fc2c7fa2
ardana > openstack floating ip create $EXT_NET --port 42c356f1-8c49-469a-99c8-12c541378741

Verify the floating IP attachment:
ardana > openstack server list
+--------------------------------------+---------+--------+----------------------------------+---------------------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+---------+--------+----------------------------------+---------------------+---------+
| a60dfb93-255b-476d-bf28-2b4f0da285e0 | lb2_vm2 | ACTIVE | lb_net2=12.12.12.5, 10.84.57.35 | cirros-0.4.0-x86_64 | m1.tiny |
| 874e675b-727b-44a2-99a9-0ff178590f86 | lb2_vm1 | ACTIVE | lb_net2=12.12.12.13, 10.84.57.39 | cirros-0.4.0-x86_64 | m1.tiny |
+--------------------------------------+---------+--------+----------------------------------+---------------------+---------+

Save the floating IPs of the web server VMs:
ardana > LB_VM2_FIP="10.84.57.35"
ardana > LB_VM1_FIP="10.84.57.39"

Use the ping command to verify that the VMs respond on their floating IPs:

ardana > ping $LB_VM2_FIP
PING 10.84.57.35 (10.84.57.35) 56(84) bytes of data.
64 bytes from 10.84.57.35: icmp_seq=1 ttl=62 time=2.09 ms
64 bytes from 10.84.57.35: icmp_seq=2 ttl=62 time=0.542 ms
^C
ardana > ping $LB_VM1_FIP
PING 10.84.57.39 (10.84.57.39) 56(84) bytes of data.
64 bytes from 10.84.57.39: icmp_seq=1 ttl=62 time=1.55 ms
64 bytes from 10.84.57.39: icmp_seq=2 ttl=62 time=0.552 ms
^C
43.5 Create Load Balancers #
The following steps set up a new Octavia load balancer. The examples assume the names and values from the previous section.
Create the load balancer on the tenant network. Use the bash variable VIP_SUBNET that holds the ID of the subnet saved earlier, or the actual subnet ID:

ardana > openstack loadbalancer create --name lb2 --vip-subnet-id $VIP_SUBNET
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| admin_state_up | True |
| created_at | 2019-10-29T16:28:03 |
| description | |
| flavor | |
| id | 2b0660e1-2901-41ea-93d8-d1fa590b9cfd |
| listeners | |
| name | lb2 |
| operating_status | OFFLINE |
| pools | |
| project_id | de095070f242416cb3dc4cd00e3c79f7 |
| provider | amphora |
| provisioning_status | PENDING_CREATE |
| updated_at | None |
| vip_address | 12.12.12.6 |
| vip_network_id | 50a66468-084b-457f-88e4-2edb7b81851e |
| vip_port_id | 5122c1d0-2996-43d5-ad9b-7b3b2c5903d5 |
| vip_qos_policy_id | None |
| vip_subnet_id | c141858a-a792-4c89-91f0-de4dc4694a7f |
+---------------------+--------------------------------------+

Save the load balancer vip_port_id. This is used when attaching a floating IP to the load balancer:

ardana > LB_PORT="5122c1d0-2996-43d5-ad9b-7b3b2c5903d5"
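Instead of copying the ID by hand, you can extract it with the CLI's value formatter (a convenience, not part of the original procedure):

ardana > LB_PORT=$(openstack loadbalancer show lb2 -f value -c vip_port_id)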
List the load balancers. Wait until the load balancer's provisioning_status is ACTIVE before proceeding to the next step:

ardana > openstack loadbalancer list
+--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
| id | name | project_id | vip_address | provisioning_status | provider |
+--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
| 2b0660e1-2901-41ea-93d8-d1fa590b9cfd | lb2 | de095070f242416cb3dc4cd00e3c79f7 | 12.12.12.6 | PENDING_CREATE | amphora |
+--------------------------------------+------+----------------------------------+-------------+---------------------+----------+

After some time has passed, the status will change to ACTIVE.
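Rather than re-running the list command manually, a small loop can poll until the state becomes ACTIVE (a sketch; the five-second interval is arbitrary):

ardana > until [ "$(openstack loadbalancer show lb2 -f value -c provisioning_status)" = "ACTIVE" ]; do sleep 5; done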
ardana > openstack loadbalancer list
+--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
| id | name | project_id | vip_address | provisioning_status | provider |
+--------------------------------------+------+----------------------------------+-------------+---------------------+----------+
| 2b0660e1-2901-41ea-93d8-d1fa590b9cfd | lb2 | de095070f242416cb3dc4cd00e3c79f7 | 12.12.12.6 | ACTIVE | amphora |
+--------------------------------------+------+----------------------------------+-------------+---------------------+----------+

Once the load balancer is created, create the listener. This may take some time:
ardana > openstack loadbalancer listener create --protocol HTTP \
  --protocol-port=80 --name lb2_listener lb2
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| connection_limit | -1 |
| created_at | 2019-10-29T16:32:26 |
| default_pool_id | None |
| default_tls_container_ref | None |
| description | |
| id | 65930b35-70bf-47d2-a135-aff49c219222 |
| insert_headers | None |
| l7policies | |
| loadbalancers | 2b0660e1-2901-41ea-93d8-d1fa590b9cfd |
| name | lb2_listener |
| operating_status | OFFLINE |
| project_id | de095070f242416cb3dc4cd00e3c79f7 |
| protocol | HTTP |
| protocol_port | 80 |
| provisioning_status | PENDING_CREATE |
| sni_container_refs | [] |
| timeout_client_data | 50000 |
| timeout_member_connect | 5000 |
| timeout_member_data | 50000 |
| timeout_tcp_inspect | 0 |
| updated_at | None |
+---------------------------+--------------------------------------+
Create the load balancing pool. During its creation, the status of the load balancer goes to PENDING_UPDATE. Use openstack loadbalancer pool show to watch for the change to ACTIVE. Once the load balancer returns to ACTIVE, proceed with the next step.

ardana > openstack loadbalancer pool create --lb-algorithm ROUND_ROBIN \
  --protocol HTTP --listener lb2_listener --name lb2_pool
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| admin_state_up | True |
| created_at | 2019-10-29T16:35:06 |
| description | |
| healthmonitor_id | |
| id | 75cd42fa-0525-421f-afaa-5de996267536 |
| lb_algorithm | ROUND_ROBIN |
| listeners | 65930b35-70bf-47d2-a135-aff49c219222 |
| loadbalancers | 2b0660e1-2901-41ea-93d8-d1fa590b9cfd |
| members | |
| name | lb2_pool |
| operating_status | OFFLINE |
| project_id | de095070f242416cb3dc4cd00e3c79f7 |
| protocol | HTTP |
| provisioning_status | PENDING_CREATE |
| session_persistence | None |
| updated_at | None |
+---------------------+--------------------------------------+

Wait for the status to change to ACTIVE:

ardana > openstack loadbalancer pool show lb2_pool
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| admin_state_up | True |
| created_at | 2019-10-29T16:35:06 |
| description | |
| healthmonitor_id | |
| id | 75cd42fa-0525-421f-afaa-5de996267536 |
| lb_algorithm | ROUND_ROBIN |
| listeners | 65930b35-70bf-47d2-a135-aff49c219222 |
| loadbalancers | 2b0660e1-2901-41ea-93d8-d1fa590b9cfd |
| members | |
| name | lb2_pool |
| operating_status | ONLINE |
| project_id | de095070f242416cb3dc4cd00e3c79f7 |
| protocol | HTTP |
| provisioning_status | ACTIVE |
| session_persistence | None |
| updated_at | 2019-10-29T16:35:10 |
+---------------------+--------------------------------------+

Add the private IP of the first web server on the tenant network to the pool:
In the previous section, the addresses of the VMs were saved in the IP_LBVM1 and IP_LBVM2 bash variables. Use those variables or the literal addresses of the VMs.

ardana > openstack loadbalancer member create --subnet $VIP_SUBNET \
  --address $IP_LBVM1 --protocol-port 80 lb2_pool
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| address | 12.12.12.13 |
| admin_state_up | True |
| created_at | 2019-10-29T16:39:02 |
| id | 3012cfa2-359c-48c1-8b6f-c650a47c2b73 |
| name | |
| operating_status | NO_MONITOR |
| project_id | de095070f242416cb3dc4cd00e3c79f7 |
| protocol_port | 80 |
| provisioning_status | PENDING_CREATE |
| subnet_id | c141858a-a792-4c89-91f0-de4dc4694a7f |
| updated_at | None |
| weight | 1 |
| monitor_port | None |
| monitor_address | None |
| backup | False |
+---------------------+--------------------------------------+

Add the second web server's tenant network address to the pool:
ardana > openstack loadbalancer member create --subnet $VIP_SUBNET \
  --address $IP_LBVM2 --protocol-port 80 lb2_pool
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| address | 12.12.12.5 |
| admin_state_up | True |
| created_at | 2019-10-29T16:39:59 |
| id | 3dab546b-e3ea-46b3-96f2-a7463171c2b9 |
| name | |
| operating_status | NO_MONITOR |
| project_id | de095070f242416cb3dc4cd00e3c79f7 |
| protocol_port | 80 |
| provisioning_status | PENDING_CREATE |
| subnet_id | c141858a-a792-4c89-91f0-de4dc4694a7f |
| updated_at | None |
| weight | 1 |
| monitor_port | None |
| monitor_address | None |
| backup | False |
+---------------------+--------------------------------------+

Verify the configuration:
ardana > openstack loadbalancer list --fit-width
+---------------------------+------+---------------------------+-------------+---------------------+----------+
| id | name | project_id | vip_address | provisioning_status | provider |
+---------------------------+------+---------------------------+-------------+---------------------+----------+
| 2b0660e1-2901-41ea- | lb2 | de095070f242416cb3dc4cd00 | 12.12.12.6 | ACTIVE | amphora |
| 93d8-d1fa590b9cfd | | e3c79f7 | | | |
+---------------------------+------+---------------------------+-------------+---------------------+----------+

ardana > openstack loadbalancer show lb2 --fit-width
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| admin_state_up | True |
| created_at | 2019-10-29T16:28:03 |
| description | |
| flavor | |
| id | 2b0660e1-2901-41ea-93d8-d1fa590b9cfd |
| listeners | 65930b35-70bf-47d2-a135-aff49c219222 |
| name | lb2 |
| operating_status | ONLINE |
| pools | 75cd42fa-0525-421f-afaa-5de996267536 |
| project_id | de095070f242416cb3dc4cd00e3c79f7 |
| provider | amphora |
| provisioning_status | ACTIVE |
| updated_at | 2019-10-29T16:40:01 |
| vip_address | 12.12.12.6 |
| vip_network_id | 50a66468-084b-457f-88e4-2edb7b81851e |
| vip_port_id | 5122c1d0-2996-43d5-ad9b-7b3b2c5903d5 |
| vip_qos_policy_id | None |
| vip_subnet_id | c141858a-a792-4c89-91f0-de4dc4694a7f |
+---------------------+--------------------------------------+

ardana > openstack loadbalancer listener list --fit-width
+-----------------+-----------------+--------------+-----------------+----------+---------------+----------------+
| id | default_pool_id | name | project_id | protocol | protocol_port | admin_state_up |
+-----------------+-----------------+--------------+-----------------+----------+---------------+----------------+
| 65930b35-70bf-4 | 75cd42fa-0525 | lb2_listener | de095070f242416 | HTTP | 80 | True |
| 7d2-a135-aff49c | -421f-afaa- | | cb3dc4cd00e3c79 | | | |
| 219222 | 5de996267536 | | f7 | | | |
+-----------------+-----------------+--------------+-----------------+----------+---------------+----------------+

ardana > openstack loadbalancer listener show lb2_listener --fit-width
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| connection_limit | -1 |
| created_at | 2019-10-29T16:32:26 |
| default_pool_id | 75cd42fa-0525-421f-afaa-5de996267536 |
| default_tls_container_ref | None |
| description | |
| id | 65930b35-70bf-47d2-a135-aff49c219222 |
| insert_headers | None |
| l7policies | |
| loadbalancers | 2b0660e1-2901-41ea-93d8-d1fa590b9cfd |
| name | lb2_listener |
| operating_status | ONLINE |
| project_id | de095070f242416cb3dc4cd00e3c79f7 |
| protocol | HTTP |
| protocol_port | 80 |
| provisioning_status | ACTIVE |
| sni_container_refs | [] |
| timeout_client_data | 50000 |
| timeout_member_connect | 5000 |
| timeout_member_data | 50000 |
| timeout_tcp_inspect | 0 |
| updated_at | 2019-10-29T16:40:01 |
+---------------------------+--------------------------------------+
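The pool members above report an operating_status of NO_MONITOR because no health monitor is attached, so failed members will not be taken out of rotation automatically. Optionally, you can attach a health monitor to the pool; a sketch with illustrative timing values:

ardana > openstack loadbalancer healthmonitor create --name lb2_hm --delay 5 \
  --timeout 3 --max-retries 3 --type HTTP --url-path / lb2_pool

As with the other Octavia operations, wait for the load balancer to return to ACTIVE afterwards.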
43.6 Create Floating IPs for Load Balancer #
To create the floating IP for the load balancer, you will need to list the ports to find the load balancer's port ID. Once you have the port ID, you can create the floating IP. Note that the port name has changed between the neutron lbaas-loadbalancer and openstack loadbalancer CLIs. Be sure to pick the port whose name begins with octavia-lb-, or use the port ID that was saved when the load balancer was created.
Search for the load balancer's port; it will be used when the floating IP is created. Alternatively, use the bash variable that contains the load balancer port from the previous section. Verify the port that has the address for the load balancer:
ardana > openstack port list --fit-width
+----------------------------+----------------------------+-------------------+-----------------------------+--------+
| ID | Name | MAC Address | Fixed IP Addresses | Status |
+----------------------------+----------------------------+-------------------+-----------------------------+--------+
| 5122c1d0-2996-43d5-ad9b- | octavia-lb-2b0660e1-2901 | fa:16:3e:6a:59:f3 | ip_address='12.12.12.6', su | DOWN |
| 7b3b2c5903d5 | -41ea-93d8-d1fa590b9cfd | | bnet_id='c141858a-a792-4c89 | |
| | | | -91f0-de4dc4694a7f' | |

Create the floating IP for the load balancer's port:
ardana > openstack floating ip create --port $LB_PORT $EXT_NET --fit-width
+---------------------+-------------------------------------------------------------------------------------------------+
| Field | Value |
+---------------------+-------------------------------------------------------------------------------------------------+
| created_at | 2019-10-29T16:59:43Z |
| description | |
| dns_domain | |
| dns_name | |
| fixed_ip_address | 12.12.12.6 |
| floating_ip_address | 10.84.57.29 |
| floating_network_id | 2ed7deca-81ed-45ee-87ce-aeb4f565d3ad |
| id | f4f18854-9d92-4eb4-8c05-96ef059b4a41 |
| name | 10.84.57.29 |
| port_details | {u'status': u'DOWN', u'name': u'octavia-lb-2b0660e1-2901-41ea-93d8-d1fa590b9cfd', |
| | u'admin_state_up': False, u'network_id': u'50a66468-084b-457f-88e4-2edb7b81851e', |
| | u'device_owner': u'Octavia', u'mac_address': u'fa:16:3e:6a:59:f3', u'device_id': u'lb- |
| | 2b0660e1-2901-41ea-93d8-d1fa590b9cfd'} |
| port_id | 5122c1d0-2996-43d5-ad9b-7b3b2c5903d5 |
| project_id | de095070f242416cb3dc4cd00e3c79f7 |
| qos_policy_id | None |
| revision_number | 0 |
| router_id | 1c9949fb-a500-475d-8694-346cf66ebf9a |
| status | DOWN |
| subnet_id | None |
| tags | [] |
| updated_at | 2019-10-29T16:59:43Z |
+---------------------+-------------------------------------------------------------------------------------------------+

Save the floating IP of the load balancer for future use:
ardana > LB_FIP="10.84.57.29"
43.7 Testing the Octavia Load Balancer #
This is the web server code that runs on the VMs created earlier. In this example, the web server listens on port 80. Create the script on the Cloud Lifecycle Manager (the quoted heredoc delimiter keeps $MYIP from being expanded locally):

ardana > cat <<'EOF' > webserver.sh
#!/bin/sh
MYIP=$(/sbin/ifconfig eth0 | grep 'inet addr' | awk -F: '{print $2}' | awk '{print $1}')
while true; do
  echo -e "HTTP/1.0 200 OK

Welcome to $MYIP" | sudo nc -l -p 80
done
EOF
Remove any old floating IPs from known_hosts:

ardana > ssh-keygen -R $LB_VM1_FIP
ardana > ssh-keygen -R $LB_VM2_FIP

Deploy the web server application:
ardana > scp -o StrictHostKeyChecking=no -i lb_kp1.pem webserver.sh cirros@$LB_VM1_FIP:webserver.sh
ardana > ssh -o StrictHostKeyChecking=no -i lb_kp1.pem cirros@$LB_VM1_FIP 'chmod +x ./webserver.sh'
ardana > ssh -o StrictHostKeyChecking=no -i lb_kp1.pem cirros@$LB_VM1_FIP '(./webserver.sh&)'
ardana > scp -o StrictHostKeyChecking=no -i lb_kp1.pem webserver.sh cirros@$LB_VM2_FIP:webserver.sh
ardana > ssh -o StrictHostKeyChecking=no -i lb_kp1.pem cirros@$LB_VM2_FIP 'chmod +x ./webserver.sh'
ardana > ssh -o StrictHostKeyChecking=no -i lb_kp1.pem cirros@$LB_VM2_FIP '(./webserver.sh&)'

Make sure the web servers respond with the correct IPs:
ardana > curl $LB_VM1_FIP
Welcome to 12.12.12.13
ardana > curl $LB_VM2_FIP
Welcome to 12.12.12.5

Access the web servers through the Octavia load balancer using the floating IP:
ardana > curl $LB_FIP
Welcome to 12.12.12.5
ardana > curl $LB_FIP
Welcome to 12.12.12.13
ardana > curl $LB_FIP
Welcome to 12.12.12.5
ardana > curl $LB_FIP
Welcome to 12.12.12.13

The responses alternate between the two back-end servers, confirming that round-robin load balancing is working.
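To exercise the rotation with more samples, a short loop is convenient (a sketch, not part of the original procedure):

ardana > for i in $(seq 1 6); do curl -s $LB_FIP; done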