12 Managing Orchestration #
Information about managing and configuring the Orchestration service, based on OpenStack heat.
12.1 Configuring the Orchestration Service #
Information about configuring the Orchestration service, based on OpenStack heat.
The Orchestration service, based on OpenStack heat, does not need any additional configuration to be used. This document describes some configuration options as well as reasons you may want to use them.
heat Stack Tag Feature
heat provides a feature called Stack Tags to allow attributing a set of simple string-based tags to stacks and optionally the ability to hide stacks with certain tags by default. This feature can be used for behind-the-scenes orchestration of cloud infrastructure, without exposing the cloud user to the resulting automatically-created stacks.
Additional details can be seen here: OpenStack - Stack Tags.
To use the heat stack tag feature, use the following steps to define the hidden_stack_tags setting in the heat configuration file and then reconfigure the service to enable the feature.
Log in to the Cloud Lifecycle Manager.
Edit the heat configuration file, at this location:
~/openstack/my_cloud/config/heat/heat.conf.j2
Under the [DEFAULT] section, add a line for hidden_stack_tags. Example:

[DEFAULT]
hidden_stack_tags="<hidden_tag>"
Commit the changes to your local git:

ardana > cd ~/openstack/ardana/ansible
ardana > git add --all
ardana > git commit -m "enabling heat Stack Tag feature"

Run the configuration processor:

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml

Update your deployment directory:

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost ready-deployment.yml

Reconfigure the Orchestration service:

ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts heat-reconfigure.yml
To begin using the feature, follow these steps to create a heat stack using the defined hidden tag. You will need to use credentials that have heat admin permissions. In the example steps below we are going to do this from the Cloud Lifecycle Manager, using the admin credentials and a heat template named heat.yaml:
Log in to the Cloud Lifecycle Manager.
Source the admin credentials:

ardana > source ~/service.osrc

Create a heat stack using this feature:

ardana > openstack stack create -t heat.yaml hidden_stack_tags --tags hidden

If you list your heat stacks, your hidden one will not show unless you use the --hidden switch.

Example, not showing hidden stacks:

ardana > openstack stack list

Example, showing the hidden stacks:

ardana > openstack stack list --hidden
12.2 Autoscaling using the Orchestration Service #
Autoscaling is a process that can be used to scale up and down your compute resources based on the load they are currently experiencing to ensure a balanced load.
12.2.1 What is autoscaling? #
Autoscaling is a process that can be used to scale up and down your compute resources based on the load they are currently experiencing to ensure a balanced load across your compute environment.
Autoscaling is only supported for KVM.
12.2.2 How does autoscaling work? #
The monitoring service, monasca, monitors your infrastructure resources and generates alarms based on their state. The Orchestration service, heat, talks to the monasca API and offers the capability to templatize the existing monasca resources: the monasca notification and the monasca alarm definition. heat can configure certain alarms for the infrastructure resources (compute instances and block storage volumes) it creates, and can expect monasca to notify it continuously whenever a certain evaluation pattern in an alarm definition is met.
For example, heat can tell monasca that it needs an alarm generated if the average CPU utilization of the compute instance in a scaling group goes beyond 90%.
As monasca continuously monitors all the resources in the cloud, if it happens to see a compute instance spiking above 90% load as configured by heat, it generates an alarm and in turn sends a notification to heat. Once heat is notified, it will execute an action that was preconfigured in the template. Commonly, this action will be a scale up to increase the number of compute instances to balance the load that is being taken by the compute instance scaling group.
monasca sends a notification every 60 seconds while the alarm is in the ALARM state.
12.2.3 Autoscaling template example #
The following monasca alarm definition template snippet is an example of instructing monasca to generate an alarm if the average CPU utilization in a group of compute instances exceeds 50%. If the alarm is triggered, it will invoke the up_notification webhook once the alarm evaluation expression is satisfied.
cpu_alarm_high:
  type: OS::Monasca::AlarmDefinition
  properties:
    name: CPU utilization beyond 50 percent
    description: CPU utilization reached beyond 50 percent
    expression:
      str_replace:
        template: avg(cpu.utilization_perc{scale_group=scale_group_id}) > 50 times 3
        params:
          scale_group_id: {get_param: "OS::stack_id"}
    severity: high
    alarm_actions:
      - {get_resource: up_notification}
The following monasca notification template snippet is an example of creating a monasca notification resource that will be used by the alarm definition snippet to notify heat.
up_notification:
  type: OS::Monasca::Notification
  properties:
    type: webhook
    address: {get_attr: [scale_up_policy, alarm_url]}
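The up_notification resource points at the alarm URL of a scaling policy named scale_up_policy, which the snippets above do not define. The following is a minimal sketch of the scaling group and scaling policy that such a template would also contain; the resource names and the nested template file lcm_server.yaml are assumptions for illustration, not part of the original example.

scale_group:
  type: OS::Heat::AutoScalingGroup
  properties:
    min_size: 1
    max_size: 5
    desired_capacity: 2
    resource:
      # Hypothetical nested template describing one compute instance in the
      # group; its instances must carry the scale_group dimension used in the
      # alarm expression above.
      type: lcm_server.yaml

scale_up_policy:
  type: OS::Heat::ScalingPolicy
  properties:
    adjustment_type: change_in_capacity
    auto_scaling_group_id: {get_resource: scale_group}
    cooldown: 60
    scaling_adjustment: 1

The alarm_url attribute of scale_up_policy is the webhook address that monasca calls when the alarm fires, which is how the notification above connects the alarm to the scale-up action.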
12.2.4 monasca Agent configuration options #
There is a monasca Agent configuration option that controls how soon after a compute instance is created its measurements start being received. The variable is monasca_libvirt_vm_probation, which is set in the ~/openstack/my_cloud/config/nova/libvirt-monitoring.yml file. Here is a snippet of the file showing the description and the variable:
# The period of time (in seconds) in which to suspend metrics from a
# newly-created VM. This is used to prevent creating and storing
# quickly-obsolete metrics in an environment with a high amount of instance
# churn (VMs created and destroyed in rapid succession). Setting to 0
# disables VM probation and metrics will be recorded as soon as possible
# after a VM is created. Decreasing this value in an environment with a high
# amount of instance churn can have a large effect on the total number of
# metrics collected and increase the amount of CPU, disk space and network
# bandwidth required for monasca. This value may need to be decreased if
# heat Autoscaling is in use so that heat knows that a new VM has been
# created and is handling some of the load.
monasca_libvirt_vm_probation: 300
The default value is 300. This is the time in seconds that a compute instance must live before the monasca libvirt agent plugin will send measurements for it. This is so that the monasca metrics database does not fill with measurements from short-lived compute instances. However, this means that the monasca threshold engine will not see measurements from a newly created compute instance for at least five minutes on scale up. If the newly created compute instance is able to start handling the load in less than five minutes, then heat autoscaling may mistakenly create another compute instance since the alarm does not clear.
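One way to reduce the chance of this spurious scale-up, sketched below under the assumption that a scaling policy like the scale_up_policy example from Section 12.2.3 is in use, is to set the policy's cooldown property to at least the probation period, so that heat waits for measurements from the new instance before it will scale again.

scale_up_policy:
  type: OS::Heat::ScalingPolicy
  properties:
    adjustment_type: change_in_capacity
    auto_scaling_group_id: {get_resource: scale_group}
    # Assumed value: matches the default monasca_libvirt_vm_probation of
    # 300 seconds, so the new instance reports measurements before the
    # next scale-up can trigger.
    cooldown: 300
    scaling_adjustment: 1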
If the default monasca_libvirt_vm_probation turns out to be an issue, it can be lowered. However, that will affect all compute instances, not just the ones used by heat autoscaling, and can increase the number of measurements stored in monasca if there are many short-lived compute instances. You should consider how often compute instances are created that live for less than the new value of monasca_libvirt_vm_probation. If few, if any, compute instances live less than the value of monasca_libvirt_vm_probation, then this value can be decreased without causing issues. If many compute instances live less than the monasca_libvirt_vm_probation period, then decreasing it can cause excessive disk, CPU and memory usage by monasca.
If you wish to change this value, follow these steps:
Log in to the Cloud Lifecycle Manager.
Edit the monasca_libvirt_vm_probation value in this configuration file:

~/openstack/my_cloud/config/nova/libvirt-monitoring.yml
Commit your changes to the local git:

ardana > cd ~/openstack/ardana/ansible
ardana > git add --all
ardana > git commit -m "changing monasca Agent configuration option"

Run the configuration processor:

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml

Update your deployment directory:

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost ready-deployment.yml

Run this playbook to reconfigure the nova service and enact your changes:

ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts nova-reconfigure.yml
12.3 Orchestration Service support for LBaaS v2 #
In SUSE OpenStack Cloud, the Orchestration service provides support for LBaaS v2, which means users can create LBaaS v2 resources using Orchestration.
The OpenStack documentation for LBaaS v2 resource plugins is available at the following locations:
neutron LBaaS v2 LoadBalancer: http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LBaaS::LoadBalancer
neutron LBaaS v2 Listener: http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LBaaS::Listener
neutron LBaaS v2 Pool: http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LBaaS::Pool
neutron LBaaS v2 Pool Member: http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LBaaS::PoolMember
neutron LBaaS v2 Health Monitor: http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::LBaaS::HealthMonitor
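As an illustration, the following is a minimal sketch of a load balancer, listener, and pool built from these plugins; the subnet name private-subnet and the protocol and port values are assumptions for the example, not required values.

lb:
  type: OS::Neutron::LBaaS::LoadBalancer
  properties:
    vip_subnet: private-subnet  # assumed subnet name

listener:
  type: OS::Neutron::LBaaS::Listener
  properties:
    loadbalancer: {get_resource: lb}
    protocol: HTTP
    protocol_port: 80

pool:
  type: OS::Neutron::LBaaS::Pool
  properties:
    listener: {get_resource: listener}
    lb_algorithm: ROUND_ROBIN
    protocol: HTTP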
12.3.1 Limitations #
To avoid stack-create timeouts when using load balancers, it is recommended that no more than 100 load balancers be created at a time using stack-create loops. Larger numbers of load balancers could reach quotas or exhaust resources, resulting in stack-create timeouts.
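As a sketch of such a loop, assuming a hypothetical template named lb.yaml that defines a single load balancer, the following creates stacks in a batch of at most 100:

ardana > for i in $(seq 1 100); do \
           openstack stack create -t lb.yaml lb-stack-$i; \
         done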
12.3.2 More Information #
For more information on the neutron command-line interface (CLI) and load balancing, see the OpenStack networking command-line client reference: http://docs.openstack.org/cli-reference/content/neutronclient_commands.html
For more information on heat, see: http://docs.openstack.org/developer/heat