Applies to SUSE Linux Enterprise High Availability Extension 11 SP4

6 Configuring and Managing Cluster Resources (GUI)

This chapter introduces the Pacemaker GUI and covers basic tasks needed when configuring and managing cluster resources: modifying global cluster options, creating basic and advanced types of resources (groups and clones), configuring constraints, specifying failover nodes and failback nodes, configuring resource monitoring, starting, cleaning up or removing resources, and migrating resources manually.

Support for the GUI is provided by two packages: The pacemaker-mgmt package contains the back-end for the GUI (the mgmtd daemon). It must be installed on all cluster nodes you want to connect to with the GUI. On any machine where you want to run the GUI, install the pacemaker-mgmt-client package.

Note
Note: User Authentication

To log in to the cluster from the Pacemaker GUI, the respective user must be a member of the haclient group. The installation creates a Linux user named hacluster and adds it to the haclient group.

Before using the Pacemaker GUI, either set a password for the hacluster user or create a new user that is a member of the haclient group.

Do this on every node you will connect to with the Pacemaker GUI.

6.1 Pacemaker GUI—Overview

To start the Pacemaker GUI, enter crm_gui at the command line. To access the configuration and administration options, you need to log in to a cluster.

6.1.1 Logging in to a Cluster

To connect to the cluster, select Connection › Login. By default, the Server field shows the localhost's IP address and hacluster as User Name. Enter the user's password to continue.

Connecting to the Cluster
Figure 6.1: Connecting to the Cluster

If you are running the Pacemaker GUI remotely, enter the IP address of a cluster node as Server. As User Name, you can also use any other user belonging to the haclient group to connect to the cluster.

6.1.2 Main Window

After you have connected, the main window opens:

Pacemaker GUI - Main Window
Figure 6.2: Pacemaker GUI - Main Window
Note
Note: Available Functions in Pacemaker GUI

By default, users logged in as root or hacluster have full read-write access to all cluster configuration tasks. However, Access Control Lists can be used to define fine-grained access permissions.

If ACLs are enabled in the CRM, the available functions in the Pacemaker GUI depend on the user role and access permission assigned to you.

To view or modify cluster components like the CRM, resources, nodes or constraints, select the respective subentry of the Configuration category in the left pane and use the options that become available in the right pane. Additionally, the Pacemaker GUI lets you easily view, edit, import and export XML fragments of the CIB for the following subitems: Resource Defaults, Operation Defaults, Nodes, Resources, and Constraints. Select any of the Configuration subitems and select Show › XML Mode in the upper right corner of the window.

If you have already configured your resources, click the Management category in the left pane to show the status of your cluster and its resources. This view also allows you to set nodes to standby and to modify the management status of nodes (if they are currently managed by the cluster or not). To access the main functions for resources (starting, stopping, cleaning up or migrating resources), select the resource in the right pane and use the icons in the toolbar. Alternatively, right-click the resource and select the respective menu item from the context menu.

The Pacemaker GUI also allows you to switch between different view modes, influencing the behavior of the software and hiding or showing certain aspects:

Simple Mode

Lets you add resources in a wizard-like mode. When creating and modifying resources, shows the frequently used tabs for sub-objects, allowing you to directly add objects of that type via the tab.

Allows you to view and change all available global cluster options by selecting the CRM Config entry in the left pane. The right pane then shows the values that are currently set. If no specific value is set for an option, it shows the default values instead.

Expert Mode

Lets you add resources in either a wizard-like mode or via dialog windows. When creating and modifying resources, it only shows the corresponding tab if a particular type of sub-object already exists in CIB. When adding a new sub-object, you will be prompted to select the object type, thus allowing you to add all supported types of sub-objects.

When selecting the CRM Config entry in the left pane, it only shows the values of global cluster options that have been actually set. It hides all cluster options that will automatically use the defaults (because no values have been set). In this mode, the global cluster options can only be modified by using the individual configuration dialogs.

Hack Mode

Has the same functions as the expert mode. Allows you to add additional attribute sets that include specific rules to make your configuration more dynamic. For example, you can make a resource have different instance attributes depending on the node it is hosted on. Furthermore, you can add a time-based rule for a meta attribute set to determine when the attributes take effect.

The window's status bar also shows the currently active mode.

The following sections guide you through the main tasks you need to execute when configuring cluster options and resources and show you how to administer the resources with the Pacemaker GUI. Where not stated otherwise, the step-by-step instructions reflect the procedure as executed in the simple mode.

6.2 Configuring Global Cluster Options

Global cluster options control how the cluster behaves when confronted with certain situations. They are grouped into sets and can be viewed and modified with the cluster management tools like Pacemaker GUI and crm shell. The predefined values can be kept in most cases. However, to make key functions of your cluster work correctly, you need to adjust the following parameters after basic cluster setup:

Procedure 6.1: Modifying Global Cluster Options
  1. Start the Pacemaker GUI and log in to the cluster as described in Section 6.1.1, “Logging in to a Cluster”.

  2. Select View › Simple Mode.

  3. In the left pane, select CRM Config to view the global cluster options and their current values.

  4. Depending on your cluster requirements, set No Quorum Policy to the appropriate value.

  5. If you need to disable fencing for any reasons, deselect stonith-enabled.

    Important
    Important: No Support Without STONITH

    A cluster without STONITH enabled is not supported.

  6. Confirm your changes with Apply.

You can at any time switch back to the default values for all options by selecting CRM Config in the left pane and clicking Default.
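For reference, the global options set in this procedure correspond to nvpair entries in the crm_config section of the CIB, which you can inspect via Show › XML Mode. The following is a sketch of how such a fragment might look; the IDs and the no-quorum-policy value are illustrative examples, not a recommendation:

```xml
<crm_config>
  <cluster_property_set id="cib-bootstrap-options">
    <!-- What to do when the cluster partition loses quorum (example value) -->
    <nvpair id="opt-no-quorum-policy" name="no-quorum-policy" value="ignore"/>
    <!-- Keep STONITH enabled; a cluster without STONITH is not supported -->
    <nvpair id="opt-stonith-enabled" name="stonith-enabled" value="true"/>
  </cluster_property_set>
</crm_config>
```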

6.3 Configuring Cluster Resources

As a cluster administrator, you need to create cluster resources for every resource or application you run on servers in your cluster. Cluster resources can include Web sites, e-mail servers, databases, file systems, virtual machines, and any other server-based applications or services you want to make available to users at all times.

For an overview of resource types you can create, refer to Section 4.2.3, “Types of Resources”.

6.3.1 Creating Simple Cluster Resources

To create the most basic type of a resource, proceed as follows:

Procedure 6.2: Adding Primitive Resources
  1. Start the Pacemaker GUI and log in to the cluster as described in Section 6.1.1, “Logging in to a Cluster”.

  2. In the left pane, select Resources and click Add › Primitive.

  3. In the next dialog, set the following parameters for the resource:

    1. Enter a unique ID for the resource.

    2. From the Class list, select the resource agent class you want to use for that resource: lsb, ocf, service, or stonith. For more information, see Section 4.2.2, “Supported Resource Agent Classes”.

    3. If you selected ocf as class, also specify the Provider of your OCF resource agent. The OCF specification allows multiple vendors to supply the same resource agent.

    4. From the Type list, select the resource agent you want to use (for example, IPaddr or Filesystem). A short description for this resource agent is displayed below.

      The selection you get in the Type list depends on the Class (and for OCF resources also on the Provider) you have chosen.

    5. Below Options, set the Initial state of resource.

    6. Activate Add monitor operation if you want the cluster to monitor if the resource is still healthy.

  4. Click Forward. The next window shows a summary of the parameters that you have already defined for that resource. All required Instance Attributes for that resource are listed. Edit them to set appropriate values. You may also need to add more attributes, depending on your deployment and settings. For details on how to do so, refer to Procedure 6.3, “Adding or Modifying Meta and Instance Attributes”.

  5. If all parameters are set according to your wishes, click Apply to finish the configuration of that resource. The configuration dialog is closed and the main window shows the newly added resource.
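If you open Show › XML Mode for a resource created this way, the CIB fragment might look similar to the following sketch (the IDs are illustrative; the IPaddr agent and its ip attribute serve as the example):

```xml
<primitive id="my-ip" class="ocf" provider="heartbeat" type="IPaddr">
  <instance_attributes id="my-ip-instance_attributes">
    <!-- Required instance attribute of the IPaddr resource agent -->
    <nvpair id="my-ip-ia-ip" name="ip" value="192.168.1.180"/>
  </instance_attributes>
</primitive>
```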

During or after creation of a resource, you can add or modify the following parameters for resources:

Procedure 6.3: Adding or Modifying Meta and Instance Attributes
  1. In the Pacemaker GUI main window, click Resources in the left pane to see the resources already configured for the cluster.

  2. In the right pane, select the resource to modify and click Edit (or double-click the resource). The next window shows the basic resource parameters and the Meta Attributes, Instance Attributes or Operations already defined for that resource.

  3. To add a new meta attribute or instance attribute, select the respective tab and click Add.

  4. Select the Name of the attribute you want to add. A short Description is displayed.

  5. If needed, specify an attribute Value. Otherwise the default value of that attribute will be used.

  6. Click OK to confirm your changes. The newly added or modified attribute appears on the tab.

  7. If all parameters are set according to your wishes, click OK to finish the configuration of that resource. The configuration dialog is closed and the main window shows the modified resource.
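Meta attributes and instance attributes added in this procedure end up in separate child elements of the primitive in the CIB. A sketch with example IDs, assuming the target-role meta attribute has been set:

```xml
<primitive id="my-ip" class="ocf" provider="heartbeat" type="IPaddr">
  <meta_attributes id="my-ip-meta_attributes">
    <!-- Meta attributes control how the CRM handles the resource -->
    <nvpair id="my-ip-ma-target-role" name="target-role" value="Started"/>
  </meta_attributes>
  <instance_attributes id="my-ip-instance_attributes">
    <!-- Instance attributes are passed to the resource agent -->
    <nvpair id="my-ip-ia-ip" name="ip" value="192.168.1.180"/>
  </instance_attributes>
</primitive>
```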

Tip
Tip: XML Source Code for Resources

The Pacemaker GUI allows you to view the XML fragments that are generated from the parameters that you have defined. For individual resources, select Show › XML Mode in the top right corner of the resource configuration dialog.

To access the XML representation of all resources that you have configured, click Resources in the left pane and then select Show › XML Mode in the upper right corner of the main window.

The editor displaying the XML code allows you to Import or Export the XML elements or to manually edit the XML code.

6.3.2 Creating STONITH Resources

Important
Important: No Support Without STONITH

A cluster without STONITH running is not supported.

By default, the global cluster option stonith-enabled is set to true: If no STONITH resources have been defined, the cluster will refuse to start any resources. To complete STONITH setup, you need to configure one or more STONITH resources. While they are configured similarly to other resources, the behavior of STONITH resources differs in some respects. For details, refer to Section 9.3, “STONITH Resources and Configuration”.

Procedure 6.4: Adding a STONITH Resource
  1. Start the Pacemaker GUI and log in to the cluster as described in Section 6.1.1, “Logging in to a Cluster”.

  2. In the left pane, select Resources and click Add › Primitive.

  3. In the next dialog, set the following parameters for the resource:

    1. Enter a unique ID for the resource.

    2. From the Class list, select the resource agent class stonith.

    3. From the Type list, select the STONITH plug-in for controlling your STONITH device. A short description for this plug-in is displayed below.

    4. Below Options, set the Initial state of resource.

    5. Activate Add monitor operation if you want the cluster to monitor the fencing device. For more information, refer to Section 9.4, “Monitoring Fencing Devices”.

  4. Click Forward. The next window shows a summary of the parameters that you have already defined for that resource. All required Instance Attributes for the selected STONITH plug-in are listed. Edit them to set appropriate values. You may also need to add more attributes or monitor operations, depending on your deployment and settings. For details on how to do so, refer to Procedure 6.3, “Adding or Modifying Meta and Instance Attributes” and Section 6.3.8, “Configuring Resource Monitoring”.

  5. If all parameters are set according to your wishes, click Apply to finish the configuration of that resource. The configuration dialog closes and the main window shows the newly added resource.
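In the CIB, a STONITH resource is an ordinary primitive with class stonith and no provider. A sketch with illustrative IDs, assuming the external/sbd plug-in as the example device (substitute the plug-in matching your hardware):

```xml
<primitive id="stonith-sbd" class="stonith" type="external/sbd">
  <operations>
    <!-- Periodically verify that the fencing device is still reachable -->
    <op id="stonith-sbd-monitor" name="monitor" interval="3600s" timeout="20s"/>
  </operations>
</primitive>
```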

To complete your fencing configuration, add constraints or use clones or both. For more details, refer to Chapter 9, Fencing and STONITH.

6.3.3 Using Resource Templates

If you want to create several resources with similar configurations, defining a resource template is the easiest way. Once defined, it can be referenced in primitives or in certain types of constraints. For detailed information about function and use of resource templates, refer to Section 4.4.3, “Resource Templates and Constraints”.

  1. Start the Pacemaker GUI and log in to the cluster as described in Section 6.1.1, “Logging in to a Cluster”.

  2. In the left pane, select Resources and click Add › Template.

  3. Enter a unique ID for the template.

  4. Specify the resource template as you would specify a primitive. Follow Procedure 6.2, “Adding Primitive Resources”, starting with Step 3.b.

  5. If all parameters are set according to your wishes, click Apply to finish the configuration of the resource template. The configuration dialog is closed and the main window shows the newly added resource template.
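In XML Mode, a resource template looks like a primitive, except that the element is named template. A sketch, assuming an illustrative template for Xen guests (IDs and the meta attribute value are examples):

```xml
<template id="vm-template" class="ocf" provider="heartbeat" type="Xen">
  <meta_attributes id="vm-template-meta_attributes">
    <!-- Shared setting inherited by every primitive referencing the template -->
    <nvpair id="vm-template-ma-allow-migrate" name="allow-migrate" value="true"/>
  </meta_attributes>
</template>
```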

Procedure 6.5: Referencing Resource Templates
  1. Start the Pacemaker GUI and log in to the cluster as described in Section 6.1.1, “Logging in to a Cluster”.

  2. To reference the newly created resource template in a primitive, follow these steps:

    1. In the left pane, select Resources and click Add › Primitive.

    2. Enter a unique ID and specify Class, Provider, and Type.

    3. Select the Template to reference.

    4. If you want to set specific instance attributes, operations or meta attributes that deviate from the template, continue to configure the resource as described in Procedure 6.2, “Adding Primitive Resources”.

  3. To reference the newly created resource template in colocational or ordering constraints:

    1. Configure the constraints as described in Procedure 6.7, “Adding or Modifying Colocational Constraints” or Procedure 6.8, “Adding or Modifying Ordering Constraints”, respectively.

    2. For colocation constraints, the Resources drop-down list will show the IDs of all resources and resource templates that have been configured. From there, select the template to reference.

    3. Likewise, for ordering constraints, the First and Then drop-down lists will show both resources and resource templates. From there, select the template to reference.
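In the CIB, a primitive references a template via a template attribute in place of class, provider, and type. A sketch with illustrative IDs (vm-template stands for whatever template ID you created; locally set attributes override those inherited from the template):

```xml
<primitive id="vm1" template="vm-template">
  <instance_attributes id="vm1-instance_attributes">
    <!-- Overrides or supplements the template's instance attributes -->
    <nvpair id="vm1-ia-name" name="name" value="vm1"/>
  </instance_attributes>
</primitive>
```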

6.3.4 Configuring Resource Constraints

Having all the resources configured is only part of the job. Even if the cluster knows all needed resources, it might still not be able to handle them correctly. Resource constraints let you specify which cluster nodes resources can run on, what order resources will load, and what other resources a specific resource is dependent on.

For an overview which types of constraints are available, refer to Section 4.4.1, “Types of Constraints”. When defining constraints, you also need to specify scores. For more information about scores and their implications in the cluster, see Section 4.4.2, “Scores and Infinity”.

Learn how to create the different types of the constraints in the following procedures.

Procedure 6.6: Adding or Modifying Locational Constraints
  1. Start the Pacemaker GUI and log in to the cluster as described in Section 6.1.1, “Logging in to a Cluster”.

  2. In the Pacemaker GUI main window, click Constraints in the left pane to see the constraints already configured for the cluster.

  3. In the left pane, select Constraints and click Add.

  4. Select Resource Location and click OK.

  5. Enter a unique ID for the constraint. When modifying existing constraints, the ID is already defined and is displayed in the configuration dialog.

  6. Select the Resource for which to define the constraint. The list shows the IDs of all resources that have been configured for the cluster.

  7. Set the Score for the constraint. Positive values indicate the resource can run on the Node you specify below. Negative values mean it should not run on this node. Setting the score to INFINITY forces the resource to run on the node. Setting it to -INFINITY means the resources must not run on the node.

  8. Select the Node for the constraint.

  9. If you leave the Node and Score fields empty, you can also add rules by clicking Add › Rule. To add a lifetime, click Add › Lifetime.

  10. If all parameters are set according to your wishes, click OK to finish the configuration of the constraint. The configuration dialog is closed and the main window shows the newly added or modified constraint.
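A simple location constraint maps to a single rsc_location element in the CIB. A sketch with illustrative IDs and score (a score of 200 merely expresses a preference; INFINITY would force placement):

```xml
<!-- Prefer running my-ip on node1 -->
<rsc_location id="loc-my-ip-node1" rsc="my-ip" node="node1" score="200"/>
```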

Procedure 6.7: Adding or Modifying Colocational Constraints
  1. Start the Pacemaker GUI and log in to the cluster as described in Section 6.1.1, “Logging in to a Cluster”.

  2. In the Pacemaker GUI main window, click Constraints in the left pane to see the constraints already configured for the cluster.

  3. In the left pane, select Constraints and click Add.

  4. Select Resource Colocation and click OK.

  5. Enter a unique ID for the constraint. When modifying existing constraints, the ID is already defined and is displayed in the configuration dialog.

  6. Select the Resource which is the colocation source. The list shows the IDs of all resources and resource templates that have been configured for the cluster.

    If the constraint cannot be satisfied, the cluster may decide not to allow the resource to run at all.

  7. In With Resource, define the colocation target. The cluster will decide where to put this resource first and then decide where to put the resource in the Resource field.

  8. If you leave both the Resource and the With Resource field empty, you can also add a resource set by clicking Add › Resource Set. To add a lifetime, click Add › Lifetime.

  9. Define a Score to determine the location relationship between both resources. Positive values indicate the resources should run on the same node. Negative values indicate the resources should not run on the same node. Setting the score to INFINITY forces the resources to run on the same node. Setting it to -INFINITY means the resources must not run on the same node. The score will be combined with other factors to decide where to put the resource.

  10. If needed, specify further parameters, like Resource Role.

    Depending on the parameters and options you choose, a short Description explains the effect of the colocational constraint you are configuring.

  11. If all parameters are set according to your wishes, click OK to finish the configuration of the constraint. The configuration dialog is closed and the main window shows the newly added or modified constraint.
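A simple colocation constraint maps to a single rsc_colocation element in the CIB. A sketch with illustrative IDs, assuming a resource named webserver that must run with my-ip:

```xml
<!-- Run webserver on the same node as my-ip; INFINITY makes this mandatory -->
<rsc_colocation id="col-web-with-ip" rsc="webserver" with-rsc="my-ip" score="INFINITY"/>
```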

Procedure 6.8: Adding or Modifying Ordering Constraints
  1. Start the Pacemaker GUI and log in to the cluster as described in Section 6.1.1, “Logging in to a Cluster”.

  2. In the Pacemaker GUI main window, click Constraints in the left pane to see the constraints already configured for the cluster.

  3. In the left pane, select Constraints and click Add.

  4. Select Resource Order and click OK.

  5. Enter a unique ID for the constraint. When modifying existing constraints, the ID is already defined and is displayed in the configuration dialog.

  6. With First, define the resource that must be started before the resource specified with Then is allowed to start.

  7. With Then, define the resource that will start after the First resource.

    Depending on the parameters and options you choose, a short Description explains the effect of the ordering constraint you are configuring.

  8. If needed, define further parameters, for example:

    1. Specify a Score. If greater than zero, the constraint is mandatory, otherwise it is only a suggestion. The default value is INFINITY.

    2. Specify a value for Symmetrical. If true, the resources are stopped in the reverse order. The default value is true.

  9. If all parameters are set according to your wishes, click OK to finish the configuration of the constraint. The configuration dialog is closed and the main window shows the newly added or modified constraint.
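A simple ordering constraint maps to a single rsc_order element in the CIB. A sketch with illustrative IDs, assuming my-ip must start before a resource named webserver:

```xml
<!-- Start my-ip first, then webserver; with symmetrical="true",
     the resources are stopped in the reverse order -->
<rsc_order id="ord-ip-before-web" first="my-ip" then="webserver" score="INFINITY" symmetrical="true"/>
```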

You can access and modify all constraints that you have configured in the Constraints view of the Pacemaker GUI.

Pacemaker GUI - Constraints
Figure 6.3: Pacemaker GUI - Constraints

6.3.5 Specifying Resource Failover Nodes

A resource will be automatically restarted if it fails. If that cannot be achieved on the current node, or it fails N times on the current node, it will try to fail over to another node. You can define a number of failures for resources (a migration-threshold), after which they will migrate to a new node. If you have more than two nodes in your cluster, the node a particular resource fails over to is chosen by the High Availability software.

However, you can specify the node a resource will fail over to by proceeding as follows:

  1. Configure a location constraint for that resource as described in Procedure 6.6, “Adding or Modifying Locational Constraints”.

  2. Add the migration-threshold meta attribute to that resource as described in Procedure 6.3, “Adding or Modifying Meta and Instance Attributes” and enter a Value for the migration-threshold. The value should be positive and less than INFINITY.

  3. If you want to automatically expire the failcount for a resource, add the failure-timeout meta attribute to that resource as described in Procedure 6.3, “Adding or Modifying Meta and Instance Attributes” and enter a Value for the failure-timeout.

  4. If you want to specify additional failover nodes with preferences for a resource, create additional location constraints.

For an example of the process flow in the cluster regarding migration thresholds and failcounts, see Example 4.6, “Migration Threshold—Process Flow”.

Instead of letting the failcount for a resource expire automatically, you can also clean up failcounts for a resource manually at any time. Refer to Section 6.4.2, “Cleaning Up Resources” for the details.
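The two meta attributes from the procedure above appear in the resource's meta_attributes set in the CIB. A sketch with illustrative IDs and values:

```xml
<meta_attributes id="my-ip-meta_attributes">
  <!-- Migrate the resource away after 3 failures on a node -->
  <nvpair id="my-ip-ma-migration-threshold" name="migration-threshold" value="3"/>
  <!-- Expire the failcount automatically after 120 seconds -->
  <nvpair id="my-ip-ma-failure-timeout" name="failure-timeout" value="120"/>
</meta_attributes>
```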

6.3.6 Specifying Resource Failback Nodes (Resource Stickiness)

A resource might fail back to its original node when that node is back online and in the cluster. If you want to prevent a resource from failing back to the node it was running on prior to failover, or if you want to specify a different node for the resource to fail back to, you must change its resource stickiness value. You can either specify resource stickiness when you are creating a resource, or afterwards.

For the implications of different resource stickiness values, refer to Section 4.4.5, “Failback Nodes”.

Procedure 6.9: Specifying Resource Stickiness
  1. Add the resource-stickiness meta attribute to the resource as described in Procedure 6.3, “Adding or Modifying Meta and Instance Attributes”.

  2. As Value for the resource-stickiness, specify a value between -INFINITY and INFINITY.
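In the CIB, this adds a single nvpair to the resource's meta attributes. A sketch with an illustrative ID and value:

```xml
<meta_attributes id="my-ip-meta_attributes">
  <!-- A positive value makes the resource prefer to stay where it is;
       INFINITY would prevent failback entirely -->
  <nvpair id="my-ip-ma-resource-stickiness" name="resource-stickiness" value="1000"/>
</meta_attributes>
```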

6.3.7 Configuring Placement of Resources Based on Load Impact

Not all resources are equal. Some, such as Xen guests, require that the node hosting them meets their capacity requirements. If resources are placed such that their combined needs exceed the provided capacity, their performance diminishes (or they even fail).

To take this into account, the High Availability Extension allows you to specify the following parameters:

  1. The capacity a certain node provides.

  2. The capacity a certain resource requires.

  3. An overall strategy for placement of resources.

Utilization attributes are used to configure both the resource's requirements and the capacity a node provides. The High Availability Extension now also provides means to detect and configure both node capacity and resource requirements automatically. For more details and a configuration example, refer to Section 4.4.6, “Placing Resources Based on Their Load Impact”.

To manually configure the resource's requirements and the capacity a node provides, proceed as described in Procedure 6.10, “Adding Or Modifying Utilization Attributes”. You can name the utilization attributes according to your preferences and define as many name/value pairs as your configuration needs.

Procedure 6.10: Adding Or Modifying Utilization Attributes

In the following example, we assume that you already have a basic configuration of cluster nodes and resources and now additionally want to configure the capacities a certain node provides and the capacity a certain resource requires. The procedure of adding utilization attributes is basically the same and only differs in Step 2 and Step 3.

  1. Start the Pacemaker GUI and log in to the cluster as described in Section 6.1.1, “Logging in to a Cluster”.

  2. To specify the capacity a node provides:

    1. In the left pane, click Nodes.

    2. In the right pane, select the node whose capacity you want to configure and click Edit.

  3. To specify the capacity a resource requires:

    1. In the left pane, click Resources.

    2. In the right pane, select the resource whose capacity you want to configure and click Edit.

  4. Select the Utilization tab and click Add to add a utilization attribute.

  5. Enter a Name for the new attribute. You can name the utilization attributes according to your preferences.

  6. Enter a Value for the attribute and click OK. The attribute value must be an integer.

  7. If you need more utilization attributes, repeat Step 5 to Step 6.

    The Utilization tab shows a summary of the utilization attributes that you have already defined for that node or resource.

  8. If all parameters are set according to your wishes, click OK to close the configuration dialog.

Figure 6.4, “Example Configuration for Node Capacity” shows the configuration of a node which would provide 8 CPU units and 16 GB of memory to resources running on that node:

Example Configuration for Node Capacity
Figure 6.4: Example Configuration for Node Capacity

An example configuration for a resource requiring 4096 memory units and 4 of the CPU units of a node would look as follows:

Example Configuration for Resource Capacity
Figure 6.5: Example Configuration for Resource Capacity
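The utilization attributes from Figure 6.4 and Figure 6.5 would appear in the CIB roughly as follows. The attribute names cpu and memory are chosen freely, and the IDs are illustrative:

```xml
<!-- Node providing 8 CPU units and 16 GB of memory -->
<node id="node1" uname="node1" type="normal">
  <utilization id="node1-utilization">
    <nvpair id="node1-utilization-cpu" name="cpu" value="8"/>
    <nvpair id="node1-utilization-memory" name="memory" value="16384"/>
  </utilization>
</node>

<!-- Resource requiring 4 CPU units and 4096 memory units -->
<primitive id="xen1" class="ocf" provider="heartbeat" type="Xen">
  <utilization id="xen1-utilization">
    <nvpair id="xen1-utilization-cpu" name="cpu" value="4"/>
    <nvpair id="xen1-utilization-memory" name="memory" value="4096"/>
  </utilization>
</primitive>
```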

After (manual or automatic) configuration of the capacities your nodes provide and the capacities your resources require, you need to set the placement strategy in the global cluster options, otherwise the capacity configurations have no effect. Several strategies are available to schedule the load: for example, you can concentrate it on as few nodes as possible, or balance it evenly over all available nodes. For more information, refer to Section 4.4.6, “Placing Resources Based on Their Load Impact”.

Procedure 6.11: Setting the Placement Strategy
  1. Start the Pacemaker GUI and log in to the cluster as described in Section 6.1.1, “Logging in to a Cluster”.

  2. Select View › Simple Mode.

  3. In the left pane, select CRM Config to view the global cluster options and their current values.

  4. Depending on your requirements, set Placement Strategy to the appropriate value.

  5. If you need to disable fencing for any reasons, deselect Stonith Enabled.

  6. Confirm your changes with Apply.
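Like the other global options, the placement strategy is stored as an nvpair in the crm_config section of the CIB. A sketch with an illustrative ID, assuming the balanced strategy has been chosen:

```xml
<!-- Distribute the load evenly over all available nodes -->
<nvpair id="opt-placement-strategy" name="placement-strategy" value="balanced"/>
```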

6.3.8 Configuring Resource Monitoring

In addition to detecting node failures, the High Availability Extension can also detect when an individual resource on a node has failed. If you want to ensure that a resource is running, you must configure resource monitoring for it. Resource monitoring consists of specifying a timeout and/or start delay value, and an interval. The interval tells the CRM how often it should check the resource status. You can also set particular parameters, such as Timeout for start or stop operations.

Procedure 6.12: Adding or Modifying Monitor Operations
  1. Start the Pacemaker GUI and log in to the cluster as described in Section 6.1.1, “Logging in to a Cluster”.

  2. In the Pacemaker GUI main window, click Resources in the left pane to see the resources already configured for the cluster.

  3. In the right pane, select the resource to modify and click Edit. The next window shows the basic resource parameters and the meta attributes, instance attributes and operations already defined for that resource.

  4. To add a new monitor operation, select the respective tab and click Add.

    To modify an existing operation, select the respective entry and click Edit.

  5. In Name, select the action to perform, for example monitor, start, or stop.

    The parameters shown below depend on the selection you make here.

  6. In the Timeout field, enter a value in seconds. After the specified timeout period, the operation will be treated as failed. The PE will decide what to do or execute what you specified in the On Fail field of the monitor operation.

  7. If needed, expand the Optional section and add parameters, like On Fail (what to do if this action ever fails?) or Requires (what conditions need to be satisfied before this action occurs?).

  8. If all parameters are set according to your wishes, click OK to finish the configuration of that resource. The configuration dialog is closed and the main window shows the modified resource.
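A monitor operation configured this way is stored under the resource's operations element in the CIB. A sketch with illustrative IDs and values:

```xml
<primitive id="my-ip" class="ocf" provider="heartbeat" type="IPaddr">
  <operations>
    <!-- Check the resource every 10s; treat the check as failed after 20s;
         on failure, restart the resource -->
    <op id="my-ip-monitor-10s" name="monitor" interval="10s" timeout="20s" on-fail="restart"/>
  </operations>
</primitive>
```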

For the processes which take place if the resource monitor detects a failure, refer to Section 4.3, “Resource Monitoring”.

To view resource failures in the Pacemaker GUI, click Management in the left pane, then select the resource whose details you want to see in the right pane. For a resource that has failed, the Fail Count and last failure of the resource is shown in the middle of the right pane (below the Migration threshold entry).

Viewing a Resource's Failcount
Figure 6.6: Viewing a Resource's Failcount

6.3.9 Configuring a Cluster Resource Group

Some cluster resources are dependent on other components or resources, and require that each component or resource starts in a specific order and runs together on the same server. To simplify this configuration we support the concept of groups.

For an example of a resource group and more information about groups and their properties, refer to Section 4.2.5.1, “Groups”.

Note
Note: Empty Groups

Groups must contain at least one resource, otherwise the configuration is not valid.

Procedure 6.13: Adding a Resource Group
  1. Start the Pacemaker GUI and log in to the cluster as described in Section 6.1.1, “Logging in to a Cluster”.

  2. In the left pane, select Resources and click Add › Group.

  3. Enter a unique ID for the group.

  4. Below Options, set the Initial state of resource and click Forward.

  5. In the next step, you can add primitives as sub-resources for the group. These are created in a similar way as described in Procedure 6.2, “Adding Primitive Resources”.

  6. If all parameters are set according to your wishes, click Apply to finish the configuration of the primitive.

  7. In the next window, you can continue adding sub-resources for the group by choosing Primitive again and clicking OK.

    When you do not want to add more primitives to the group, click Cancel instead. The next window shows a summary of the parameters that you have already defined for that group. The Meta Attributes and Primitives of the group are listed. The position of the resources in the Primitive tab represents the order in which the resources are started in the cluster.

  8. As the order of resources in a group is important, use the Up and Down buttons to sort the Primitives in the group.

  9. If all parameters are set according to your wishes, click OK to finish the configuration of that group. The configuration dialog is closed and the main window shows the newly created or modified group.

Pacemaker GUI - Groups
Figure 6.7: Pacemaker GUI - Groups
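As an alternative to the GUI, a group can also be defined with the crm shell. A minimal sketch, assuming two primitives with the hypothetical names my_first_rsc and my_second_rsc already exist:

```shell
# Group two existing primitives; members are started in the listed
# order and stopped in the reverse order
crm configure group my_group my_first_rsc my_second_rsc
```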

Let us assume you already have created a resource group as explained in Procedure 6.13, “Adding a Resource Group”. The following procedure shows you how to modify the group to match Example 4.1, “Resource Group for a Web Server”.

Procedure 6.14: Adding Resources to an Existing Group
  1. Start the Pacemaker GUI and log in to the cluster as described in Section 6.1.1, “Logging in to a Cluster”.

  2. In the left pane, switch to the Resources view and in the right pane, select the group to modify and click Edit. The next window shows the basic group parameters and the meta attributes and primitives already defined for that resource.

  3. Click the Primitives tab and click Add.

  4. In the next dialog, set the following parameters to add an IP address as sub-resource of the group:

    1. Enter a unique ID (for example, my_ipaddress).

    2. From the Class list, select ocf as resource agent class.

    3. As Provider of your OCF resource agent, select heartbeat.

    4. From the Type list, select IPaddr as resource agent.

    5. Click Forward.

    6. In the Instance Attribute tab, select the IP entry and click Edit (or double-click the IP entry).

    7. As Value, enter the desired IP address, for example, 192.168.1.180.

    8. Click OK and Apply. The group configuration dialog shows the newly added primitive.

  5. Add the next sub-resources (file system and Web server) by clicking Add again.

  6. Set the respective parameters for each of the sub-resources in a similar way to Step 4.a through Step 4.h, until you have configured all sub-resources for the group.

    As we configured the sub-resources in the order in which they need to be started in the cluster, the order on the Primitives tab is already correct.

  7. In case you need to change the resource order for a group, use the Up and Down buttons to sort the resources on the Primitive tab.

  8. To remove a resource from the group, select the resource on the Primitives tab and click Remove.

  9. Click OK to finish the configuration of that group. The configuration dialog is closed and the main window shows the modified group.
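The Web server group from this procedure could also be sketched with the crm shell. The resource names, device, mount point, and Apache configuration file below are examples for illustration, not values from your cluster:

```shell
# Hypothetical crm shell equivalent of the Web server group
crm configure primitive my_ipaddress ocf:heartbeat:IPaddr \
    params ip="192.168.1.180"
crm configure primitive my_filesystem ocf:heartbeat:Filesystem \
    params device="/dev/sdb1" directory="/srv/www" fstype="ext3"
crm configure primitive my_webserver ocf:heartbeat:apache \
    params configfile="/etc/apache2/httpd.conf"
# Members start in the listed order and run together on the same node
crm configure group my_webgroup my_ipaddress my_filesystem my_webserver
```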

6.3.10 Configuring a Clone Resource

You may want certain resources to run simultaneously on multiple nodes in your cluster. To do this you must configure a resource as a clone. Examples of resources that might be configured as clones include STONITH and cluster file systems like OCFS2. You can clone any resource, provided it is supported by the resource's Resource Agent. Clone resources may even be configured differently depending on which nodes they are hosted on.

For an overview of which types of resource clones are available, refer to Section 4.2.5.2, “Clones”.

Procedure 6.15: Adding or Modifying Clones
  1. Start the Pacemaker GUI and log in to the cluster as described in Section 6.1.1, “Logging in to a Cluster”.

  2. In the left pane, select Resources and click Add › Clone.

  3. Enter a unique ID for the clone.

  4. Below Options, set the Initial state of resource.

  5. Activate the respective options you want to set for your clone and click Forward.

  6. In the next step, you can either add a Primitive or a Group as sub-resource for the clone. These are created in a similar way as described in Procedure 6.2, “Adding Primitive Resources” or Procedure 6.13, “Adding a Resource Group”.

  7. If all parameters in the clone configuration dialog are set according to your wishes, click Apply to finish the configuration of the clone.
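A hedged crm shell sketch of the same task, assuming an existing primitive with the placeholder name my_primitive:

```shell
# Clone an existing primitive; clone-max and clone-node-max control
# how many copies run in total and per node, respectively
crm configure clone my_clone my_primitive \
    meta clone-max="2" clone-node-max="1"
```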

6.4 Managing Cluster Resources

Apart from the possibility to configure your cluster resources, the Pacemaker GUI also allows you to manage existing resources. To switch to a management view and to access the available options, click Management in the left pane.

Pacemaker GUI - Management
Figure 6.8: Pacemaker GUI - Management

6.4.1 Starting Resources

Before you start a cluster resource, make sure it is set up correctly. For example, if you want to use an Apache server as a cluster resource, set up the Apache server first and complete the Apache configuration before starting the respective resource in your cluster.

Note
Note: Do Not Touch Services Managed by the Cluster

When managing a resource with the High Availability Extension, the same resource must not be started or stopped by other means (outside of the cluster, for example, manually or at boot time). The High Availability Extension software is responsible for all service start or stop actions.

However, if you want to check if the service is configured properly, start it manually, but make sure that it is stopped again before High Availability takes over.

For interventions in resources that are currently managed by the cluster, set the resource to unmanaged mode first as described in Section 6.4.5, “Changing Management Mode of Resources”.

During creation of a resource with the Pacemaker GUI, you can set the resource's initial state with the target-role meta attribute. If its value has been set to stopped, the resource does not start automatically after being created.
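In the crm shell, the same effect can be sketched by setting the target-role meta attribute at creation time (the Dummy agent and the resource name here are placeholders):

```shell
# Create a resource that remains stopped until started explicitly
crm configure primitive my_rsc ocf:heartbeat:Dummy \
    meta target-role="Stopped"
```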

Procedure 6.16: Starting A New Resource
  1. Start the Pacemaker GUI and log in to the cluster as described in Section 6.1.1, “Logging in to a Cluster”.

  2. Click Management in the left pane.

  3. In the right pane, right-click the resource and select Start from the context menu (or use the Start Resource icon in the toolbar).
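The equivalent crm shell command, assuming a resource named my_rsc:

```shell
# Set the resource's target-role to Started
crm resource start my_rsc
```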

6.4.2 Cleaning Up Resources

A resource will be automatically restarted if it fails, but each failure raises the resource's failcount. View a resource's failcount with the Pacemaker GUI by clicking Management in the left pane, then selecting the resource in the right pane. If a resource has failed, its Fail Count is shown in the middle of the right pane (below the Migration Threshold entry).

If a migration-threshold has been set for that resource, the node will no longer be allowed to run the resource as soon as the number of failures has reached the migration threshold.

A resource's failcount can either be reset automatically (by setting a failure-timeout option for the resource) or you can reset it manually as described below.
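Automatic resetting can be sketched with the crm shell by setting the failure-timeout meta attribute for a hypothetical resource my_rsc:

```shell
# Failures older than 10 minutes no longer count toward the
# migration-threshold
crm resource meta my_rsc set failure-timeout 600s
```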

Procedure 6.17: Cleaning Up A Resource
  1. Start the Pacemaker GUI and log in to the cluster as described in Section 6.1.1, “Logging in to a Cluster”.

  2. Click Management in the left pane.

  3. In the right pane, right-click the respective resource and select Cleanup Resource from the context menu (or use the Cleanup Resource icon in the toolbar).

    This executes the commands crm_resource -C and crm_failcount -D for the specified resource on the specified node.

For more information, see the man pages of crm_resource and crm_failcount.
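From the command line, the same cleanup can be triggered with the crm shell (the resource and node names below are examples):

```shell
# Runs crm_resource -C and crm_failcount -D for the resource on the
# given node; omit the node name to clean up on all nodes
crm resource cleanup my_rsc node1
```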

6.4.3 Removing Cluster Resources

If you need to remove a resource from the cluster, follow the procedure below to avoid configuration errors:

Note
Note: Removing Referenced Resources

Cluster resources cannot be removed if their ID is referenced by any constraint. If you cannot delete a resource, check where the resource ID is referenced and remove the resource from the constraint first.

Procedure 6.18: Removing a Cluster Resource
  1. Start the Pacemaker GUI and log in to the cluster as described in Section 6.1.1, “Logging in to a Cluster”.

  2. Click Management in the left pane.

  3. Select the respective resource in the right pane.

  4. Clean up the resource on all nodes as described in Procedure 6.17, “Cleaning Up A Resource”.

  5. Stop the resource.

  6. Remove all constraints that relate to the resource, otherwise removing the resource will not be possible.
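The same sequence can be sketched with the crm shell, assuming a resource named my_rsc:

```shell
crm resource cleanup my_rsc    # clean up the resource on all nodes
crm resource stop my_rsc       # stop the resource
crm configure delete my_rsc    # remove it; constraints referencing
                               # its ID must be removed first
```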

6.4.4 Migrating Cluster Resources

As mentioned in Section 6.3.5, “Specifying Resource Failover Nodes”, the cluster will fail over (migrate) resources automatically in case of software or hardware failures—according to certain parameters you can define (for example, migration threshold or resource stickiness). Apart from that, you can also manually migrate a resource to another node in the cluster.

Procedure 6.19: Manually Migrating a Resource
  1. Start the Pacemaker GUI and log in to the cluster as described in Section 6.1.1, “Logging in to a Cluster”.

  2. Click Management in the left pane.

  3. Right-click the respective resource in the right pane and select Migrate Resource.

  4. In the new window, select the node to which to move the resource in To Node. This creates a location constraint with an INFINITY score for the destination node.

  5. If you want to migrate the resource only temporarily, activate Duration and enter the time frame for which the resource should migrate to the new node. After the expiration of the duration, the resource can move back to its original location or it may stay where it is (depending on resource stickiness).

  6. In cases where the resource cannot be migrated (if the resource's stickiness and constraint scores total more than INFINITY on the current node), activate the Force option. This forces the resource to move by creating a rule for the current location and a score of -INFINITY.

    Note
    Note

    This prevents the resource from running on this node until the constraint is removed with Clear Migrate Constraints or the duration expires.

  7. Click OK to confirm the migration.
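The corresponding crm shell command, with hypothetical resource and node names (the optional lifetime is an ISO 8601 duration):

```shell
# Move my_rsc to node2 for one hour; after that the location
# constraint expires
crm resource migrate my_rsc node2 PT1H
```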

To allow a resource to move back again, proceed as follows:

Procedure 6.20: Clearing a Migration Constraint
  1. Start the Pacemaker GUI and log in to the cluster as described in Section 6.1.1, “Logging in to a Cluster”.

  2. Click Management in the left pane.

  3. Right-click the respective resource in the right pane and select Clear Migrate Constraints.

    This uses the crm_resource -U command. The resource can move back to its original location or it may stay where it is (depending on resource stickiness).

For more information, see the crm_resource man page or Pacemaker Explained, available from http://www.clusterlabs.org/doc/. Refer to section Resource Migration.
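The equivalent crm shell command for a hypothetical resource my_rsc:

```shell
# Removes the migration constraint (wraps crm_resource -U); the
# resource may then move back, depending on resource stickiness
crm resource unmigrate my_rsc
```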

6.4.5 Changing Management Mode of Resources

When a resource is being managed by the cluster, it must not be touched otherwise (outside of the cluster). For maintenance of individual resources, you can set the respective resources to an unmanaged mode that allows you to modify the resource outside of the cluster.

Procedure 6.21: Changing Management Mode of Resources
  1. Start the Pacemaker GUI and log in to the cluster as described in Section 6.1.1, “Logging in to a Cluster”.

  2. Click Management in the left pane.

  3. Right-click the respective resource in the right pane and from the context menu, select Unmanage Resource.

  4. After you have finished the maintenance task for that resource, right-click the respective resource again in the right pane and select Manage Resource.

    From this point on, the resource will be managed by the High Availability Extension software again.
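The same mode change can be performed from the command line (the resource name is an example):

```shell
crm resource unmanage my_rsc   # the cluster stops managing the resource
# ... perform the maintenance task ...
crm resource manage my_rsc     # hand control back to the cluster
```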