Applies to SUSE OpenStack Cloud Crowbar 9

11 Installing the OpenStack Nodes

The OpenStack nodes represent the actual cloud infrastructure. Node installation and service deployment are done automatically from the Administration Server. Before deploying the OpenStack services, SUSE Linux Enterprise Server 12 SP4 will be installed on all Control Nodes and Storage Nodes.

To prepare the installation, each node needs to be booted using PXE, which is provided by the TFTP server on the Administration Server. Afterward, you can allocate the nodes and trigger the operating system installation.

11.1 Preparations

Meaningful Node Names

Make a note of the MAC address and the purpose of each node (for example, controller, block storage, object storage, compute). This will make deploying the OpenStack components a lot easier and less error-prone. It also enables you to assign meaningful names (aliases) to the nodes, which are otherwise listed with the MAC address by default.

BIOS Boot Settings

Make sure booting using PXE (booting from the network) is enabled and configured as the primary boot option for each node. The nodes will boot twice from the network during the allocation and installation phase. Booting from the first hard disk needs to be configured as the second boot option.

Custom Node Configuration

All nodes are installed using AutoYaST with the same configuration located at /opt/dell/chef/cookbooks/provisioner/templates/default/autoyast.xml.erb. If this configuration does not match your needs (for example, if you need special third-party drivers), adjust this file. See the AutoYaST documentation at https://documentation.suse.com/sles/15-SP1/single-html/SLES-autoyast/#book-autoyast for details. If you change the AutoYaST configuration file, you need to re-upload it to Chef using the following command:

knife cookbook upload -o /opt/dell/chef/cookbooks/ provisioner
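
To verify that the updated cookbook has been registered with the Chef server, you can list the cookbooks it knows about; the grep filter here is only an illustrative way to narrow the output:

knife cookbook list | grep provisioner
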
Direct root Login

By default, the root account on the nodes has no password assigned, so a direct root login is not possible. Logging in on the nodes as root is only possible via SSH public keys (for example, from the Administration Server).

If you want to allow direct root login, you can set a password via the Crowbar Provisioner barclamp before deploying the nodes. That password will be used for the root account on all OpenStack nodes. Using this method after the nodes are deployed is not possible. In that case you would need to log in to each node via SSH from the Administration Server and change the password manually with passwd.

Setting a root Password for the OpenStack Nodes
  1. Create an MD5-hashed root password, for example with openssl passwd -1 (see the example after this procedure).

  2. Open a browser and point it to the Crowbar Web interface on the Administration Server, for example http://192.168.124.10. Log in as user crowbar. The password is crowbar by default, if you have not changed it during the installation.

  3. Open the barclamp menu by clicking Barclamps › Crowbar. Click the Provisioner barclamp entry and Edit the Default proposal.

  4. Click Raw in the Attributes section to edit the configuration file.

  5. Add the following line to the end of the file before the last closing curly bracket:

    , "root_password_hash": "HASHED_PASSWORD"

    replacing "HASHED_PASSWORD" with the password you generated in the first step.

  6. Click Apply.
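
As an illustration of the first and fifth steps, the hash shown below is example output only, not a value to copy:

openssl passwd -1
Password:
Verifying - Password:
$1$wJ3hXk9c$Qqz0T7cWz0XG9yFZb1Yhn1

In the Raw view, the end of the attributes would then look similar to this (abbreviated):

  , "root_password_hash": "$1$wJ3hXk9c$Qqz0T7cWz0XG9yFZb1Yhn1"
}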

11.2 Node Installation

To install a node, you need to boot it first using PXE. It will be booted with an image that enables the Administration Server to discover the node and make it available for installation. When you have allocated the node, it will boot using PXE again and the automatic installation will start.

  1. Boot all nodes that you want to deploy using PXE. The nodes will boot into the SLEShammer image, which performs the initial hardware discovery.

    Important
    Important: Limit the Number of Concurrent Boots using PXE

    Booting many nodes at the same time using PXE will cause heavy load on the TFTP server, because all nodes will request the boot image at the same time. We recommend booting the nodes at different intervals.

  2. Open a browser and point it to the Crowbar Web interface on the Administration Server, for example http://192.168.124.10/. Log in as user crowbar. The password is crowbar by default, if you have not changed it.

    Click Nodes › Dashboard to open the Node Dashboard.

  3. Each node that has successfully booted will be listed as being in state Discovered, indicated by a yellow bullet. The nodes will be listed with their MAC address as a name. Wait until all nodes are listed as Discovered before proceeding. If a node does not report as Discovered, it may need to be rebooted manually.

    Discovered Nodes
    Figure 11.1: Discovered Nodes
  4. Although this step is optional, we recommend grouping your nodes at this stage, since it gives you a clear overview of all nodes. Grouping the nodes by role is one option, for example control, compute, and object storage (swift).

    1. Enter the name of a new group into the New Group text box and click Add Group.

    2. Drag and drop a node onto the title of the newly created group. Repeat this step for each node you want to put into the group.

      Grouping Nodes
      Figure 11.2: Grouping Nodes
  5. To allocate all nodes, click Nodes › Bulk Edit. To allocate a single node, click the name of a node, then click Edit.

    Editing a Single Node
    Figure 11.3: Editing a Single Node
    Important
    Important: Limit the Number of Concurrent Node Deployments

    Deploying many nodes in bulk mode will cause heavy load on the Administration Server. The subsequent concurrent Chef client runs triggered by the nodes will require a lot of RAM on the Administration Server.

    Therefore it is recommended to limit the number of concurrent allocations in bulk mode. The maximum number depends on the amount of RAM on the Administration Server; limiting concurrent deployments to between five and ten is recommended.

  6. In single node editing mode, you can also specify the Filesystem Type for the node. By default, it is set to ext4 for all nodes. We recommend using the default.

  7. Provide a meaningful Alias, Public Name, and a Description for each node, and then check the Allocate box. You can also specify the Intended Role for the node. This optional setting is used to make reasonable proposals for the barclamps.

    By default the Target Platform is set to SLES 12 SP4.

    Tip
    Tip: Alias Names

    Providing an alias name will change the default node names (MAC address) to the name you provided, making it easier to identify the node. Furthermore, this alias will also be used as a DNS CNAME for the node in the admin network. As a result, you can access the node via this alias when, for example, logging in via SSH.

    Tip
    Tip: Public Names

    A node's Alias Name is resolved by the DNS server installed on the Administration Server and therefore only available within the cloud network. The OpenStack Dashboard or some APIs (keystone-server, glance-server, cinder-controller, neutron-server, nova-controller, and swift-proxy) can be accessed from outside the SUSE OpenStack Cloud network. To be able to access them by name, these names need to be resolved by a name server placed outside of the SUSE OpenStack Cloud network. If you have created DNS entries for nodes, specify the name in the Public Name field.

    The Public Name is never used within the SUSE OpenStack Cloud network. However, if you create an SSL certificate for a node that has a public name, this name must be added as an AlternativeName to the certificate. See Section 2.3, “SSL Encryption” for more information.

    Bulk Editing Nodes
    Figure 11.4: Bulk Editing Nodes
  8. When you have filled in the data for all nodes, click Save. The nodes will reboot and commence the AutoYaST-based SUSE Linux Enterprise Server installation (or installation of other target platforms, if selected) via a second boot using PXE. Click Nodes › Dashboard to return to the Node Dashboard.

  9. Nodes that are being installed are listed with the status Installing (yellow/green bullet). When the installation of a node has finished, it is listed as being Ready, indicated by a green bullet. Wait until all nodes are listed as Ready before proceeding.

    All Nodes Have Been Installed
    Figure 11.5: All Nodes Have Been Installed

11.3 Converting Existing SUSE Linux Enterprise Server 12 SP4 Machines Into SUSE OpenStack Cloud Nodes

SUSE OpenStack Cloud allows adding existing machines installed with SUSE Linux Enterprise Server 12 SP4 to the pool of nodes. This enables you to use spare machines for SUSE OpenStack Cloud, and offers an alternative way of provisioning and installing nodes (via SUSE Manager for example). The machine must run SUSE Linux Enterprise Server 12 SP4.

The machine also needs to be on the same network as the Administration Server, because it needs to communicate with this server. Since the Administration Server provides a DHCP server, we recommend configuring this machine to get its network assignments from DHCP. If it has a static IP address, make sure it is not already used in the admin network. Check the list of used IP addresses with the YaST Crowbar module as described in Section 7.2, “Networks”.

Proceed as follows to convert an existing SUSE Linux Enterprise Server 12 SP4 machine into a SUSE OpenStack Cloud node:

  1. Download the crowbar_register script from the Administration Server using curl or wget. The script is available at http://192.168.124.10:8091/suse-12.4/x86_64/crowbar_register (replace the IP address with the IP address of your Administration Server). Note that the download only works from within the admin network. A complete example is shown after this list.

  2. Make the crowbar_register script executable (chmod a+x crowbar_register).

  3. Run the crowbar_register script. If you have multiple network interfaces, the script tries to automatically detect the one that is connected to the admin network. You may also explicitly specify which network interface to use by using the --interface switch, for example crowbar_register --interface eth1.

  4. After the script has successfully run, the machine has been added to the pool of nodes in the SUSE OpenStack Cloud and can be used as any other node from the pool.
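
Assuming wget is used, the default Administration Server address, and a machine whose admin network interface is eth1 (adjust both to your setup), the whole registration could look like this:

# download the script from the Administration Server (works from the admin network only)
wget http://192.168.124.10:8091/suse-12.4/x86_64/crowbar_register
# make the script executable and run it; --interface is only needed if the
# automatic interface detection should be overridden
chmod a+x crowbar_register
./crowbar_register --interface eth1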

11.4 Post-Installation Configuration

The following sections describe optional configuration steps, such as configuring node updates, monitoring, access, and enabling SSL. You may skip these steps entirely or perform any of them at a later stage.

11.4.1 Deploying Node Updates with the Updater Barclamp

To keep the operating system and the SUSE OpenStack Cloud software itself up-to-date on the nodes, you can deploy either the Updater barclamp or the SUSE Manager barclamp. The latter requires access to a SUSE Manager server. The Updater barclamp uses Zypper to install updates and patches from repositories made available on the Administration Server.

The easiest way to provide the required repositories on the Administration Server is to set up an SMT server as described in Chapter 4, Installing and Setting Up an SMT Server on the Administration Server (Optional). Alternatives to setting up an SMT server are described in Chapter 5, Software Repository Setup.

The Updater barclamp lets you deploy updates that are available on the update repositories at the moment of deployment. Each time you deploy updates with this barclamp you can choose a different set of nodes to which the updates are deployed. This lets you exactly control where and when updates are deployed.

To deploy the Updater barclamp, proceed as follows. For general instructions on how to edit barclamp proposals refer to Section 10.3, “Deploying Barclamp Proposals”.

  1. Open a browser and point it to the Crowbar Web interface on the Administration Server, for example http://192.168.124.10/. Log in as user crowbar. The password is crowbar by default, if you have not changed it during the installation.

  2. Open the barclamp menu by clicking Barclamps › Crowbar. Click the Updater barclamp entry and Create to open the proposal.

  3. Configure the barclamp by setting the following attributes. This configuration always applies to all nodes on which the barclamp is deployed. To configure certain nodes individually, create a separate proposal.

    Use zypper

    Define which Zypper subcommand to use for updating. patch installs all available patches from the configured update repositories that apply to the system. update installs packages from all configured repositories (not just the update repositories) that have a higher version number than the installed packages. dist-upgrade replaces each installed package with the version from the repository and deletes packages not available in the repositories. The roughly equivalent zypper calls are sketched after this procedure.

    We recommend using patch.

    Enable GPG Checks

    If set to true (recommended), checks if packages are correctly signed.

    Automatically Agree With Licenses

    If set to true (recommended), Zypper automatically accepts third party licenses.

    Include Patches that need Reboots (Kernel)

    Installs patches that require a reboot (for example Kernel or glibc updates). Only set this option to true when you can safely reboot the affected nodes. Refer to Section 1.1, “Keeping the Nodes Up-To-Date” for more information. Installing a new Kernel and not rebooting may result in an unstable system.

    Reboot Nodes if Needed

    Automatically reboots the system in case a patch requiring a reboot has been installed. Only set this option to true when you can safely reboot the affected nodes. Refer to Section 1.1, “Keeping the Nodes Up-To-Date” for more information.

    SUSE Updater barclamp: Configuration
    Figure 11.6: SUSE Updater barclamp: Configuration
  4. Choose the nodes on which the Updater barclamp should be deployed in the Node Deployment section by dragging them to the Updater column.

    SUSE Updater barclamp: Node Deployment
    Figure 11.7: SUSE Updater barclamp: Node Deployment
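
The three values of Use zypper correspond roughly to the following zypper invocations on a node. This is a sketch for orientation only; the exact options Crowbar passes may differ:

zypper --non-interactive patch         # "patch": install applicable patches from the update repositories
zypper --non-interactive update        # "update": install newer package versions from all repositories
zypper --non-interactive dist-upgrade  # "dist-upgrade": fully synchronize with the repositories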

zypper keeps track of the packages and patches it installs in /var/log/zypp/history. Review that log file on a node to find out which updates have been installed. A second log file recording debug information on the zypper runs can be found at /var/log/zypper.log on each node.
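
For example, to review the most recent package operations on a node (the exact history file format may vary between zypper versions):

# last entries in the package history
tail -n 20 /var/log/zypp/history
# only the most recent install operations
grep '|install|' /var/log/zypp/history | tail -n 20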

Warning
Warning: Updating Software Packages on Cluster Nodes

Before starting an update for a cluster node, either stop the cluster stack on that node or put the cluster into maintenance mode. If the cluster resource manager on a node is active during the software update, this can lead to unpredictable results like fencing of active nodes. For detailed instructions refer to https://documentation.suse.com/sle-ha/15-SP1/single-html/SLE-HA-guide/#sec-ha-clvm-migrate.

11.4.2 Configuring Node Updates with the SUSE Manager Client Barclamp

To keep the operating system and the SUSE OpenStack Cloud software itself up-to-date on the nodes, you can deploy either SUSE Manager Client barclamp or the Updater barclamp. The latter uses Zypper to install updates and patches from repositories made available on the Administration Server.

To enable the SUSE Manager server to manage the SUSE OpenStack Cloud nodes, you must make the respective SUSE OpenStack Cloud Crowbar 9 channels, the SUSE Linux Enterprise Server 12 SP4 channels, and the channels for extensions used with your deployment (High Availability Extension, SUSE Enterprise Storage) available via an activation key.

The SUSE Manager Client barclamp requires access to the SUSE Manager server from every node it is deployed to.

To deploy the SUSE Manager Client barclamp, proceed as follows. For general instructions on how to edit barclamp proposals refer to Section 10.3, “Deploying Barclamp Proposals”.

  1. Download the package rhn-org-trusted-ssl-cert-VERSION-RELEASE.noarch.rpm from https://susemanager.example.com/pub/. VERSION and RELEASE may vary; ask the administrator of the SUSE Manager server for the correct values. susemanager.example.com needs to be replaced by the address of your SUSE Manager server. Copy the downloaded file to /opt/dell/chef/cookbooks/suse-manager-client/files/default/ssl-cert.rpm on the Administration Server. The package contains the SUSE Manager's CA SSL public certificate. The certificate installation has not been automated on purpose, because downloading the certificate manually enables you to check it before copying it. A sketch of this step is shown after this procedure.

  2. Re-install the barclamp by running the following command:

    /opt/dell/bin/barclamp_install.rb --rpm core
  3. Open a browser and point it to the Crowbar Web interface on the Administration Server, for example http://192.168.124.10/. Log in as user crowbar. The password is crowbar by default, if you have not changed it during the installation.

  4. Open the barclamp menu by clicking Barclamps › Crowbar. Click the SUSE Manager Client barclamp entry and Create to open the proposal.

  5. Specify the URL of the script for activation of the clients in the URL of the bootstrap script field.

  6. Choose the nodes on which the SUSE Manager barclamp should be deployed in the Deployment section by dragging them to the suse-manager-client column. We recommend deploying it on all nodes in the SUSE OpenStack Cloud.

    SUSE Manager barclamp
    Figure 11.8: SUSE Manager barclamp
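
The first two steps could, for example, be performed like this on the Administration Server. The host name as well as VERSION and RELEASE are placeholders you need to replace; the bootstrap script URL entered in step 5 typically follows the /pub/bootstrap/ convention of SUSE Manager:

# download the CA certificate package from the SUSE Manager server
curl -O https://susemanager.example.com/pub/rhn-org-trusted-ssl-cert-VERSION-RELEASE.noarch.rpm
# copy it to the location expected by the barclamp
cp rhn-org-trusted-ssl-cert-VERSION-RELEASE.noarch.rpm \
  /opt/dell/chef/cookbooks/suse-manager-client/files/default/ssl-cert.rpm
# re-install the barclamp
/opt/dell/bin/barclamp_install.rb --rpm core
# a typical bootstrap script URL looks like:
#   https://susemanager.example.com/pub/bootstrap/bootstrap.sh
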
Warning
Warning: Updating Software Packages on Cluster Nodes

Before starting an update for a cluster node, either stop the cluster stack on that node or put the cluster into maintenance mode. If the cluster resource manager on a node is active during the software update, this can lead to unpredictable results like fencing of active nodes. For detailed instructions refer to https://documentation.suse.com/sle-ha/15-SP1/single-html/SLE-HA-guide/#sec-ha-clvm-migrate.

11.4.3 Mounting NFS Shares on a Node

The NFS barclamp allows you to mount NFS shares from a remote host on nodes in the cloud. This feature can, for example, be used to provide an image repository for glance. Note that all nodes which are to mount an NFS share must be able to reach the NFS server. This requires manually adjusting the network configuration.

To deploy the NFS barclamp, proceed as follows. For general instructions on how to edit barclamp proposals refer to Section 10.3, “Deploying Barclamp Proposals”.

  1. Open a browser and point it to the Crowbar Web interface on the Administration Server, for example http://192.168.124.10/. Log in as user crowbar. The password is crowbar by default, if you have not changed it during the installation.

  2. Open the barclamp menu by clicking Barclamps › Crowbar. Click the NFS Client barclamp entry and Create to open the proposal.

  3. Configure the barclamp by setting the following attributes. Each set of attributes is used to mount a single NFS share.

    Name

    Unique name for the current configuration. This name is used in the Web interface only to distinguish between different shares.

    NFS Server

    Fully qualified host name or IP address of the NFS server.

    Export

    Export name for the share on the NFS server.

    Path

    Mount point on the target machine.

    Mount Options

    Mount options that will be used on the node. See man 8 mount for general mount options and man 5 nfs for a list of NFS-specific options. Note that the general option nofail (do not report errors if device does not exist) is automatically set.

  4. After having filled in all attributes, click Add. If you want to mount more than one share, fill in the data for another NFS mount. Otherwise click Save to save the data, or Apply to deploy the proposal. Note that you must always click Add before saving or applying the barclamp, otherwise the data that was entered will be lost.

    NFS barclamp
    Figure 11.9: NFS barclamp
  5. Go to the Node Deployment section and drag and drop all nodes on which the NFS shares defined above should be mounted to the nfs-client column. Click Apply to deploy the proposal. You can verify the result on a node as shown after this procedure.

    The NFS barclamp is the only barclamp that lets you create different proposals, enabling you to mount different NFS shares on different nodes. When you have created an NFS proposal, an additional Edit option is shown in the barclamp overview of the Crowbar Web interface. Click it to either Edit an existing proposal or Create a new one. New proposals must have unique names.

    Editing an NFS barclamp Proposal
    Figure 11.10: Editing an NFS barclamp Proposal
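
After the proposal has been applied, you can check on a node whether the share has been mounted, for example with:

# on a node from the nfs-client deployment, list mounted NFS shares
findmnt -t nfs,nfs4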

11.4.4 Using an Externally Managed Ceph Cluster

This section provides instructions on using an externally managed Ceph cluster in SUSE OpenStack Cloud Crowbar.

11.4.4.1 Requirements

Ceph Release

External Ceph clusters are supported with SUSE Enterprise Storage 5 or higher. The version of Ceph should be compatible with the version of the Ceph client supplied with SUSE Linux Enterprise Server 12 SP4.

Network Configuration

The external Ceph cluster needs to be connected to a separate VLAN, which is mapped to the SUSE OpenStack Cloud storage VLAN. See Section 2.1, “Network” for more information.

11.4.4.2 Making Ceph Available on the SUSE OpenStack Cloud Nodes

Ceph can be used from the KVM Compute Nodes, with cinder, and with glance. The following installation steps need to be executed on each node accessing Ceph:

Important
Important: Installation Workflow

The following steps need to be executed before the barclamps get deployed.

  1. Log in as user root to a machine in the Ceph cluster and generate keyring files for the cinder user. Optionally, generate keyring files for the glance user (only needed when using glance with Ceph/RADOS). The keyring file generated for cinder will also be used on the Compute Nodes. To do so, you need to specify pool names and user names for both services. The default names are:

                  glance      cinder
    User          glance      cinder
    Pool          images      volumes

    Make a note of user and pool names in case you do not use the default values. You will need this information later, when deploying glance and cinder.

  2. Warning
    Warning: Automatic Changes to the Cluster

    If you decide to use the admin keyring file to connect the external Ceph cluster, be aware that after Crowbar discovers this admin keyring, it will create client keyring files, pools, and capabilities needed to run glance, cinder, or nova integration.

    If you have access to the admin keyring file and agree that automatic changes will be made to the cluster as described above, copy it together with the Ceph configuration file to the Administration Server (first step below). If you cannot access this file, create a keyring yourself (second step below):

    1. If you can access the admin keyring file ceph.client.admin.keyring, copy it together with ceph.conf (both files are usually located in /etc/ceph) to a temporary location on the Administration Server, for example /root/tmp/.

    2. If you cannot access the admin keyring file, create a new keyring file with the following commands. Re-run the commands for glance, too, if needed. First, create a key:

      ceph auth get-or-create-key client.USERNAME mon 'allow r' \
        osd 'allow class-read object_prefix rbd_children, allow rwx pool=POOLNAME'

      Replace USERNAME and POOLNAME with the respective values. A worked example with the default cinder names is shown after this procedure.

      Now use the key to generate the keyring file /etc/ceph/ceph.client.USERNAME.keyring:

      ceph-authtool \
      /etc/ceph/ceph.client.USERNAME.keyring \
      --create-keyring --name=client.USERNAME \
      --add-key=KEY

      Replace USERNAME with the respective value and KEY with the key returned by the previous command.

      Copy the Ceph configuration file ceph.conf (usually located in /etc/ceph) and the keyring file(s) generated above to a temporary location on the Administration Server, for example /root/tmp/.

  3. Log in to the Crowbar Web interface and check whether the nodes which should have access to the Ceph cluster already have an IP address from the storage network. Do so by going to the Dashboard and clicking the node name. An IP address should be listed for storage. Make a note of the Full name of each node that has no storage network IP address.

  4. Log in to the Administration Server as user root and run the following command for all nodes you noted down in the previous step:

    crowbar network allocate_ip "default" NODE "storage" "host"
    chef-client

    NODE needs to be replaced by the node's name.

  5. After executing the command in the previous step for all affected nodes, run the command chef-client on the Administration Server.

  6. Log in to each affected node as user root. See How can I log in to a node as root? for instructions. On each node, do the following:

    1. Manually install nova, cinder (if using cinder) and/or glance (if using glance) packages with the following commands:

      zypper in openstack-glance
      zypper in openstack-cinder
      zypper in openstack-nova
    2. Copy the ceph.conf file from the Administration Server to /etc/ceph:

      mkdir -p /etc/ceph
      scp root@admin:/root/tmp/ceph.conf /etc/ceph
      chmod 664 /etc/ceph/ceph.conf
    3. Copy the keyring file(s) to /etc/ceph. The exact process depends on whether you have copied the admin keyring file or whether you have created your own keyrings:

      1. If you have copied the admin keyring file, run the following commands on the Control Node(s) on which cinder and glance will be deployed, and on all KVM Compute Nodes:

        scp root@admin:/root/tmp/ceph.client.admin.keyring /etc/ceph
        chmod 640 /etc/ceph/ceph.client.admin.keyring
      2. If you have created your own keyrings, run the following commands on the Control Node on which cinder will be deployed, and on all KVM Compute Nodes to copy the cinder keyring:

        scp root@admin:/root/tmp/ceph.client.cinder.keyring /etc/ceph
        chmod 640 /etc/ceph/ceph.client.cinder.keyring

        On the Control Node on which cinder will be deployed, run the following command to update file ownership:

        chown root.cinder /etc/ceph/ceph.client.cinder.keyring

        On the KVM Compute Nodes, run the following command to update file ownership:

        chown root.nova /etc/ceph/ceph.client.cinder.keyring

        Now copy the glance keyring to the Control Node on which glance will be deployed:

        scp root@admin:/root/tmp/ceph.client.glance.keyring /etc/ceph
        chmod 640 /etc/ceph/ceph.client.glance.keyring
        chown root.glance /etc/ceph/ceph.client.glance.keyring
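
As a worked example of the keyring creation in step 2 above, assuming the default cinder user and volumes pool (adjust the names if you use different values):

# on a Ceph cluster node: create (or fetch) a key for the cinder client
ceph auth get-or-create-key client.cinder mon 'allow r' \
  osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes'
# write the returned key into a keyring file, replacing KEY with that value
ceph-authtool /etc/ceph/ceph.client.cinder.keyring \
  --create-keyring --name=client.cinder --add-key=KEY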

11.4.5 Accessing the Nodes

The nodes can only be accessed via SSH from the Administration Server—it is not possible to connect to them from any other host in the network.

The root account on the nodes has no password assigned, therefore logging in to a node as root@node is only possible via SSH with key authentication. By default, only the SSH key of root on the Administration Server (root@admin) is authorized.

If you have added users to the Administration Server and want to give them permission to log in to the nodes as well, you need to add these users' public SSH keys to root's authorized_keys file on all nodes. Proceed as follows:

Procedure 11.1: Copying SSH Keys to All Nodes
  1. If they do not already exist, generate an SSH key pair with ssh-keygen. This key pair belongs to the user that you use to log in to the nodes. Alternatively, copy an existing public key with ssh-copy-id. Refer to the respective man pages for more information. A sketch is shown after this procedure.

  2. Log in to the Crowbar Web interface on the Administration Server, for example http://192.168.124.10/ (user name and default password: crowbar).

  3. Open the barclamp menu by clicking Barclamps › Crowbar. Click the Provisioner barclamp entry and Edit the Default proposal.

  4. Copy and paste the public SSH key of the user into the Additional SSH Keys text box. If adding keys for multiple users, note that each key needs to be placed on a new line.

  5. Click Apply to deploy the keys and save your changes to the proposal.
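
A minimal sketch of the first step, run as the additional user on the Administration Server (key type and node alias are examples only):

# generate a key pair if none exists yet
ssh-keygen -t rsa -b 4096
# print the public key; paste this single line into "Additional SSH Keys"
cat ~/.ssh/id_rsa.pub
# after the proposal has been applied, the user can log in to a node, for example:
ssh root@control1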

11.4.6 Enabling SSL

To enable SSL to encrypt communication within the cloud (see Section 2.3, “SSL Encryption” for details), all nodes running encrypted services need SSL certificates. An SSL certificate is, at a minimum, required on the Control Node.

Each certificate consists of a pair of files: the certificate file (for example, signing_cert.pem) and the key file (for example, signing_key.pem). If you use your own certificate authority (CA) for signing, you will also need a certificate file for the CA (for example, ca.pem). We recommend copying the files to the /etc directory using the directory structure outlined below. If you use a dedicated certificate for each service, create directories named after the services (for example, /etc/keystone). If you are using shared certificates, use a directory such as /etc/cloud.

Recommended Locations for Shared Certificates
SSL Certificate File

/etc/cloud/ssl/certs/signing_cert.pem

SSL Key File

/etc/cloud/private/signing_key.pem

CA Certificates File

/etc/cloud/ssl/certs/ca.pem
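
Assuming the shared-certificate layout above and the certificate files in the current working directory, placing them on a Control Node could look like this (the restrictive permissions on the key file are a suggestion):

mkdir -p /etc/cloud/ssl/certs /etc/cloud/private
cp signing_cert.pem ca.pem /etc/cloud/ssl/certs/
cp signing_key.pem /etc/cloud/private/
chmod 600 /etc/cloud/private/signing_key.pem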

11.5 Editing Allocated Nodes

All nodes that have been allocated can be decommissioned or re-installed. Click a node's name in the Node Dashboard to open a screen with the node details. The following options are available:

Forget

Deletes a node from the pool. If you want to re-use this node again, it needs to be reallocated and re-installed from scratch.

Reinstall

Triggers a reinstallation. The machine stays allocated. Any barclamps that were deployed on the machine will be re-applied after the installation.

Deallocate

Temporarily removes the node from the pool of nodes. After you reallocate the node it will take its former role. Useful for adding additional machines in times of high load or for decommissioning machines in times of low load.

Power Actions › Reboot

Reboots the node.

Power Actions › Shutdown

Shuts the node down.

Power Actions › Power Cycle

Forces a (non-clean) shutdown followed by a restart. Only use this option if a reboot does not work.

Power Actions › Power Off

Forces a (non-clean) node shutdown. Only use this option if a clean shutdown does not work.

Node Information
Figure 11.11: Node Information
Warning
Warning: Editing Nodes in a Production System

Deallocating nodes that provide essential services makes the complete cloud unusable. If you have not disabled redundancy, you can disable single storage nodes or single compute nodes. However, disabling Control Nodes will cause major problems. It will either kill certain services (for example swift) or, at worst, the complete cloud (when deallocating the Control Node hosting neutron). You should also not disable the nodes providing swift ring and proxy services.