Applies to HPE Helion OpenStack 8

11 Managing Orchestration

Information about managing and configuring the Orchestration service, based on OpenStack Heat.

11.1 Configuring the Orchestration Service

Information about configuring the Orchestration service, based on OpenStack Heat.

The Orchestration service, based on OpenStack Heat, does not need any additional configuration to be used. This document describes some configuration options and the reasons you may want to use them.

Heat Stack Tag Feature

Heat provides a feature called Stack Tags, which allows a set of simple string-based tags to be attached to stacks and, optionally, stacks with certain tags to be hidden by default. This feature can be used for behind-the-scenes orchestration of cloud infrastructure, without exposing the cloud user to the resulting automatically created stacks.

Additional details can be found in the OpenStack documentation: OpenStack - Stack Tags.

To use the Heat stack tag feature, follow these steps to define the hidden_stack_tags setting in the Heat configuration file and then reconfigure the service to enable the feature.

  1. Log in to the Cloud Lifecycle Manager.

  2. Edit the Heat configuration file, at this location:

    ~/openstack/my_cloud/config/heat/heat.conf.j2
  3. Under the [DEFAULT] section, add a line for hidden_stack_tags. Example:

    [DEFAULT]
    hidden_stack_tags="<hidden_tag>"
  4. Commit the changes to your local git:

    ardana > cd ~/openstack/ardana/ansible
    ardana > git add --all
    ardana > git commit -m "enabling Heat Stack Tag feature"
  5. Run the configuration processor:

    ardana > cd ~/openstack/ardana/ansible
    ardana > ansible-playbook -i hosts/localhost config-processor-run.yml
  6. Update your deployment directory:

    ardana > cd ~/openstack/ardana/ansible
    ardana > ansible-playbook -i hosts/localhost ready-deployment.yml
  7. Reconfigure the Orchestration service:

    ardana > cd ~/scratch/ansible/next/ardana/ansible
    ardana > ansible-playbook -i hosts/verb_hosts heat-reconfigure.yml

To begin using the feature, use these steps to create a Heat stack with the defined hidden tag. You will need credentials with Heat admin permissions. In the example steps below, we do this from the Cloud Lifecycle Manager using the admin credentials and a Heat template named heat.yaml:

  1. Log in to the Cloud Lifecycle Manager.

  2. Source the admin credentials:

    ardana > source ~/service.osrc
  3. Create a Heat stack using this feature:

    ardana > openstack stack create -f heat.yaml hidden_stack_tags --tags hidden
  4. If you list your Heat stacks, your hidden one will not show unless you use the --hidden switch.

    Example, not showing hidden stacks:

    ardana > openstack stack list

    Example, showing the hidden stacks:

    ardana > openstack stack list --hidden

11.2 Autoscaling using the Orchestration Service

Autoscaling is a process that can be used to scale up and down your compute resources based on the load they are currently experiencing to ensure a balanced load.

11.2.1 What is autoscaling?

Autoscaling is a process that can be used to scale up and down your compute resources based on the load they are currently experiencing to ensure a balanced load across your compute environment.

Important

Autoscaling is only supported for KVM.

11.2.2 How does autoscaling work?

The monitoring service, Monasca, monitors your infrastructure resources and generates alarms based on their state. The Orchestration service, Heat, talks to the Monasca API and offers the capability to templatize the existing Monasca resources: the Monasca notification and the Monasca alarm definition. Heat can configure alarms for the infrastructure resources it creates (compute instances and block storage volumes) and can expect Monasca to notify it continuously while the evaluation expression in an alarm definition is met.

For example, Heat can tell Monasca that it needs an alarm generated if the average CPU utilization of the compute instance in a scaling group goes beyond 90%.

As Monasca continuously monitors all the resources in the cloud, if it happens to see a compute instance spiking above 90% load as configured by Heat, it generates an alarm and in turn sends a notification to Heat. Once Heat is notified, it will execute an action that was preconfigured in the template. Commonly, this action will be a scale up to increase the number of compute instances to balance the load that is being taken by the compute instance scaling group.

Monasca sends a notification every 60 seconds while the alarm is in the ALARM state.

11.2.3 Autoscaling template example

The following Monasca alarm definition template snippet is an example of instructing Monasca to generate an alarm if the average CPU utilization in a group of compute instances exceeds 50%. When the alarm evaluation expression is satisfied and the alarm triggers, it invokes the up_notification webhook.

cpu_alarm_high:
  type: OS::Monasca::AlarmDefinition
  properties:
    name: CPU utilization beyond 50 percent
    description: CPU utilization reached beyond 50 percent
    expression:
      str_replace:
        template: avg(cpu.utilization_perc{scale_group=scale_group_id}) > 50 times 3
        params:
          scale_group_id: {get_param: "OS::stack_id"}
    severity: high
    alarm_actions:
      - {get_resource: up_notification}
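
For reference, str_replace substitutes the stack ID of the scaling group into the template string, so the expression Monasca actually evaluates has this shape (the dimension value shown is a placeholder, and "times 3" means the condition must hold for three consecutive evaluation periods):

```
avg(cpu.utilization_perc{scale_group=<stack ID>}) > 50 times 3
```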

The following Monasca notification template snippet is an example of creating a Monasca notification resource that will be used by the alarm definition snippet to notify Heat.

up_notification:
  type: OS::Monasca::Notification
  properties:
    type: webhook
    address: {get_attr: [scale_up_policy, alarm_url]}
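
The address above references a scale_up_policy resource that is not shown in these snippets. A minimal sketch of what that policy and its scaling group could look like is below; the resource names, flavor, image, and adjustment values are illustrative, not taken from this guide:

```yaml
scale_group:
  type: OS::Heat::AutoScalingGroup
  properties:
    min_size: 1
    max_size: 3
    resource:
      type: OS::Nova::Server
      properties:
        flavor: m1.small    # illustrative flavor
        image: cirros       # illustrative image
        # Tag the instance so its metrics carry the scale_group dimension
        # that the alarm definition expression filters on.
        metadata: {scale_group: {get_param: "OS::stack_id"}}

scale_up_policy:
  type: OS::Heat::ScalingPolicy
  properties:
    adjustment_type: change_in_capacity
    auto_scaling_group_id: {get_resource: scale_group}
    scaling_adjustment: 1   # add one instance per alarm notification
    cooldown: 60            # seconds to wait between scale-up actions
```

The alarm_url attribute of scale_up_policy resolves to the webhook address that the Monasca notification posts to when the alarm fires.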

11.2.4 Monasca Agent configuration options

There is a Monasca Agent configuration option that controls how soon after a compute instance is created its measurements begin to be reported.

The variable is monasca_libvirt_vm_probation which is set in the ~/openstack/my_cloud/config/nova/libvirt-monitoring.yml file. Here is a snippet of the file showing the description and variable:

# The period of time (in seconds) in which to suspend metrics from a
# newly-created VM. This is used to prevent creating and storing
# quickly-obsolete metrics in an environment with a high amount of instance
# churn (VMs created and destroyed in rapid succession).  Setting to 0
# disables VM probation and metrics will be recorded as soon as possible
# after a VM is created.  Decreasing this value in an environment with a high
# amount of instance churn can have a large effect on the total number of
# metrics collected and increase the amount of CPU, disk space and network
# bandwidth required for Monasca. This value may need to be decreased if
# Heat Autoscaling is in use so that Heat knows that a new VM has been
# created and is handling some of the load.
monasca_libvirt_vm_probation: 300

The default value is 300. This is the time in seconds that a compute instance must live before the Monasca libvirt agent plugin sends measurements for it, so that the Monasca metrics database does not fill with measurements from short-lived compute instances. However, this means that the Monasca threshold engine will not see measurements from a newly created compute instance for at least five minutes after a scale up. If the newly created compute instance starts handling the load in less than five minutes, Heat autoscaling may mistakenly create another compute instance because the alarm does not clear.

If the default monasca_libvirt_vm_probation turns out to be an issue, it can be lowered. However, that affects all compute instances, not just those used by Heat autoscaling, and can increase the number of measurements stored in Monasca if there are many short-lived compute instances. Consider how often compute instances are created that live for less than the new value of monasca_libvirt_vm_probation. If few, if any, compute instances live for less than that period, the value can be decreased without causing issues. If many do, decreasing it can cause excessive disk, CPU, and memory usage by Monasca.

If you wish to change this value, follow these steps:

  1. Log in to the Cloud Lifecycle Manager.

  2. Edit the monasca_libvirt_vm_probation value in this configuration file:

    ~/openstack/my_cloud/config/nova/libvirt-monitoring.yml
  3. Commit your changes to the local git:

    ardana > cd ~/openstack/ardana/ansible
    ardana > git add --all
    ardana > git commit -m "changing Monasca Agent configuration option"
  4. Run the configuration processor:

    ardana > cd ~/openstack/ardana/ansible
    ardana > ansible-playbook -i hosts/localhost config-processor-run.yml
  5. Update your deployment directory:

    ardana > cd ~/openstack/ardana/ansible
    ardana > ansible-playbook -i hosts/localhost ready-deployment.yml
  6. Run this playbook to reconfigure the Nova service and enact your changes:

    ardana > cd ~/scratch/ansible/next/ardana/ansible
    ardana > ansible-playbook -i hosts/verb_hosts nova-reconfigure.yml