Applies to HPE Helion OpenStack 8

28 UI Verification

Once you have completed your cloud deployment, these are some of the common post-installation tasks you may need to perform to verify your cloud installation.

28.1 Verifying Your Block Storage Backend

The sections below show you how to verify that your Block Storage backend was set up properly.

28.1.1 Create a Volume

Perform the following steps to create a volume using the Horizon dashboard.

  1. Log into the Horizon dashboard. For more information, see Book “User Guide”, Chapter 3 “Cloud Admin Actions with the Dashboard”.

  2. Choose Project › Compute › Volumes.

  3. On the Volumes tab, click the Create Volume button to create a volume.

  4. In the Create Volume options, enter the required details into the fields and then click the Create Volume button:

    1. Volume Name - This is the name you specify for your volume.

    2. Description (optional) - This is an optional description for the volume.

    3. Type - Select the volume type you have created for your volumes from the drop-down.

    4. Size (GB) - Enter the size, in GB, you would like the volume to be.

    5. Availability Zone - You can either leave this at the default option of Any Availability Zone or select a specific zone from the drop-down box.

The dashboard will then show the volume you have just created.
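You can also perform the same verification from the command line on the Cloud Lifecycle Manager. The sketch below assumes the admin credentials file ~/service.osrc used elsewhere in this chapter; the volume name, volume type, and 1 GB size are example values.

```shell
# Sketch only: create and list a volume with the cinder client.
# "my-volume" and the volume type "my-type" are example values;
# substitute a type that exists in your cloud.
source ~/service.osrc

cinder create --name my-volume --volume-type my-type \
    --description "verification volume" 1

# The new volume should reach the "available" status within a few seconds.
cinder list
```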

28.1.2 Attach Volume to an Instance

Perform the following steps to attach a volume to an instance:

  1. Log into the Horizon dashboard. For more information, see Book “User Guide”, Chapter 3 “Cloud Admin Actions with the Dashboard”.

  2. Choose Project › Compute › Instances.

  3. In the Actions column, choose Edit Attachments from the drop-down box next to the instance you want to attach the volume to.

  4. In the Attach To Instance drop-down, select the volume that you want to attach.

  5. Edit the Device Name if necessary.

  6. Click Attach Volume to complete the action.

  7. On the Volumes screen, verify that the volume you attached is displayed in the Attached To column.
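The same attachment can be made from the command line with the nova client. This is a sketch; the instance name my-instance, the volume name my-volume, and the device name are example values.

```shell
# Sketch only: attach a volume to an instance. nova volume-attach expects
# the volume ID, so look it up first; all names here are example values.
source ~/service.osrc

VOLUME_ID=$(cinder list | awk '/my-volume/ {print $2}')
nova volume-attach my-instance "$VOLUME_ID" /dev/vdb

# "Attached to" in the output should now list my-instance.
cinder show "$VOLUME_ID"
```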

28.1.3 Detach Volume from Instance

Perform the following steps to detach a volume from an instance:

  1. Log into the Horizon dashboard. For more information, see Book “User Guide”, Chapter 3 “Cloud Admin Actions with the Dashboard”.

  2. Choose Project › Compute › Instances.

  3. Click the check box next to the name of the instance whose volume you want to detach.

  4. In the Actions column, choose Edit Attachments from the drop-down box next to the instance you want to detach the volume from.

  5. Click Detach Attachment. A confirmation dialog box appears.

  6. Click Detach Attachment to confirm the detachment of the volume from the associated instance.
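The equivalent command-line detach, as a sketch using the same example names as above:

```shell
# Sketch only: detach the example volume from the example instance by ID.
source ~/service.osrc

VOLUME_ID=$(cinder list | awk '/my-volume/ {print $2}')
nova volume-detach my-instance "$VOLUME_ID"

# The volume status should return to "available".
cinder show "$VOLUME_ID"
```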

28.1.4 Delete Volume

Perform the following steps to delete a volume using the Horizon dashboard:

  1. Log into the Horizon dashboard. For more information, see Book “User Guide”, Chapter 3 “Cloud Admin Actions with the Dashboard”.

  2. Choose Project › Compute › Volumes.

  3. In the Actions column, click Delete Volume next to the volume you would like to delete.

  4. To confirm and delete the volume, click Delete Volume again.

  5. Verify that the volume was removed from the Volumes screen.
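From the command line, the deletion can be sketched as follows (again with an example volume name):

```shell
# Sketch only: delete the example volume and confirm it is gone.
source ~/service.osrc

cinder delete my-volume

# After a short delay the volume should no longer appear in the listing.
cinder list
```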

28.1.5 Verifying Your Object Storage (Swift)

The following procedure shows how to validate that all servers have been added to the Swift rings:

  1. Run the swift-compare-model-rings.yml playbook as follows:

    cd ~/scratch/ansible/next/ardana/ansible
    ansible-playbook -i hosts/verb_hosts swift-compare-model-rings.yml
  2. Search for output similar to the following. Specifically, look at the number of drives that are proposed to be added.

    TASK: [swiftlm-ring-supervisor | validate-input-model | Print report] *********
    ok: [ardana-cp1-c1-m1-mgmt] => {
        "var": {
            "report.stdout_lines": [
                "Rings:",
                "  ACCOUNT:",
                "    ring exists",
                "    no device changes",
                "    ring will be rebalanced",
                "  CONTAINER:",
                "    ring exists",
                "    no device changes",
                "    ring will be rebalanced",
                "  OBJECT-0:",
                "    ring exists",
                "    no device changes",
                "    ring will be rebalanced"
            ]
        }
    }
  3. If the text contains "no device changes" then the deploy was successful and no further action is needed.

  4. If more drives need to be added, it indicates that the deploy failed on some nodes and that you restarted the deploy to include those nodes. However, the nodes are not yet in the Swift rings because not enough time has elapsed to allow the rings to be rebuilt. You have two options to continue:

    1. Repeat the deploy. There are two steps:

      1. Delete the ring builder files as described in Book “Operations Guide”, Chapter 15 “Troubleshooting Issues”, Section 15.6 “Storage Troubleshooting”, Section 15.6.2 “Swift Storage Troubleshooting”, Section 15.6.2.8 “Restarting the Object Storage Deployment”.

      2. Repeat the installation process starting by running the site.yml playbook as described in Section 12.7, “Deploying the Cloud”.

    2. Rebalance the rings several times until all drives are incorporated in the rings. This process may take several hours to complete (because you need to wait one hour between each rebalance). The steps are as follows:

      1. Change the min-part-hours to 1 hour. See Book “Operations Guide”, Chapter 8 “Managing Object Storage”, Section 8.5 “Managing Swift Rings”, Section 8.5.7 “Changing min-part-hours in Swift”.

      2. Use the "First phase of ring rebalance" and "Final rebalance phase" as described in Book “Operations Guide”, Chapter 8 “Managing Object Storage”, Section 8.5 “Managing Swift Rings”, Section 8.5.5 “Applying Input Model Changes to Existing Rings”. The Weight change phase of ring rebalance does not apply because you have not set the weight-step attribute at this stage.

      3. Set the min-part-hours to the recommended 16 hours as described in Book “Operations Guide”, Chapter 8 “Managing Object Storage”, Section 8.5 “Managing Swift Rings”, Section 8.5.7 “Changing min-part-hours in Swift”.

If you receive errors during the validation, see Book “Operations Guide”, Chapter 15 “Troubleshooting Issues”, Section 15.6 “Storage Troubleshooting”, Section 15.6.2 “Swift Storage Troubleshooting”, Section 15.6.2.3 “Interpreting Swift Input Model Validation Errors”.
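The check in the steps above can be scripted. The wrapper below is a sketch: it captures the playbook output to an arbitrary log path and searches it for the "no device changes" marker. Each ring reports this line separately, so inspect the full report if the check fails.

```shell
# Sketch only: run the ring validation and scan the captured report.
# /tmp/ring-report.log is an arbitrary path chosen for this example.
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts swift-compare-model-rings.yml \
    | tee /tmp/ring-report.log

if grep -q "no device changes" /tmp/ring-report.log; then
    echo "Rings match the input model; no further action needed."
else
    echo "Drives still need to be added; see the recovery options above."
fi
```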

28.2 Verify the Object Storage (Swift) Operations

For information about verifying the operations, see Book “Operations Guide”, Chapter 8 “Managing Object Storage”, Section 8.1 “Running the Swift Dispersion Report”.

28.3 Uploading an Image for Use

To create a Compute instance, you need to obtain an image that you can use. The Cloud Lifecycle Manager provides an Ansible playbook that will download a CirrOS Linux image, and then upload it as a public image to your image repository for use across your projects.

28.3.1 Running the Playbook

Use the following command to run this playbook:

cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts glance-cloud-configure.yml -e proxy=<PROXY>

The table below shows the optional switch that you can use as part of this playbook to specify environment-specific information:

Switch: -e proxy="<proxy_address:port>"

Description: Optional. If your environment requires a proxy for the internet, use this switch to specify the proxy information.

28.3.2 How to Curate Your Own Images

OpenStack has created a guide to show you how to obtain, create, and modify images that will be compatible with your cloud:

OpenStack Virtual Machine Image Guide

28.3.3 Using the GlanceClient CLI to Create Images

You can use the GlanceClient on any machine with access to your cloud, or on your Cloud Lifecycle Manager, where it is installed automatically.

The GlanceClient allows you to create, update, list, and delete images as well as manage your image member lists, which allows you to share access to images across multiple tenants. As with most of the OpenStack CLI tools, you can use the glance help command to get a full list of commands as well as their syntax.

If you would like to use the --copy-from option when creating an image, you will need to have your Administrator enable the http store in your environment using the instructions outlined at Book “Operations Guide”, Chapter 5 “Managing Compute”, Section 5.6 “Configuring the Image Service”, Section 5.6.2 “Allowing the Glance copy-from option in your environment”.
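As a sketch of typical GlanceClient usage, the commands below create an image from a local file and share it with another project. The file path, image ID, and tenant ID are placeholders, and exact flags can vary between glance client versions.

```shell
# Sketch only: create an image and share it with another tenant.
# The file path, <image-id>, and <tenant-id> are placeholders.
source ~/service.osrc

glance image-create --name my-image --disk-format qcow2 \
    --container-format bare --file ./my-image.qcow2

# Add another project to the image's member list to share access:
glance member-create <image-id> <tenant-id>

glance image-list
```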

28.4 Creating an External Network

You must have an external network set up to allow your Compute instances to reach the internet. There are multiple methods you can use to create this external network, and we provide two of them here. The HPE Helion OpenStack installer provides an Ansible playbook that will create this network for use across your projects. We also show you how to create this network with the command-line tool from your Cloud Lifecycle Manager.

28.4.1 Using the Ansible Playbook

This playbook will query the Networking service for an existing external network, and then create a new one if you do not already have one. The resulting external network will have the name ext-net with a subnet matching the CIDR you specify in the command below.

If you need more granularity, for example to specify an allocation pool for the subnet, use Section 28.4.2, “Using the NeutronClient CLI”.

cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts neutron-cloud-configure.yml -e EXT_NET_CIDR=<CIDR>

The table below shows the optional switch that you can use as part of this playbook to specify environment-specific information:

Switch: -e EXT_NET_CIDR=<CIDR>

Description: Optional. Use this switch to specify the external network CIDR. If you omit this switch or supply an incorrect value, the VMs will not be accessible over the network. This CIDR is taken from the EXTERNAL VM network.

Note

If this option is not defined, the default value is "172.31.0.0/16".

28.4.2 Using the NeutronClient CLI

For more granularity, you can use the Neutron command-line tool to create your external network.

  1. Log in to the Cloud Lifecycle Manager.

  2. Source the Admin credentials:

    source ~/service.osrc
  3. Create the external network and then the subnet using the commands below.

    Creating the network:

    neutron net-create --router:external <external-network-name>

    Creating the subnet:

    neutron subnet-create <external-network-name> <CIDR> --gateway <gateway> \
    --allocation-pool start=<IP_start>,end=<IP_end> [--disable-dhcp]

    Where:

    Value: external-network-name

    Description: This is the name given to your external network. It is a unique value that you choose; the value ext-net is usually used.

    Value: CIDR

    Description: The external network CIDR. If you supply an incorrect value, the VMs will not be accessible over the network. This CIDR is taken from the EXTERNAL VM network.

    Value: --gateway

    Description: Optional switch to specify the gateway IP for your subnet. If this is not included, the first available IP is chosen.

    Value: --allocation-pool start=<IP_start>,end=<IP_end>

    Description: Optional switch to specify a start and end IP address to use as the allocation pool for this subnet.

    Value: --disable-dhcp

    Description: Optional switch to disable DHCP on this subnet. If this is not specified, DHCP will be enabled.
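Putting the two commands together, a worked example using the default 172.31.0.0/16 CIDR mentioned above might look like this; the gateway and allocation-pool addresses are illustrative values.

```shell
# Example invocation with illustrative addresses from 172.31.0.0/16.
source ~/service.osrc

neutron net-create ext-net --router:external

neutron subnet-create ext-net 172.31.0.0/16 --gateway 172.31.0.1 \
    --allocation-pool start=172.31.0.10,end=172.31.0.250 --disable-dhcp
```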

28.4.3 Next Steps

Once the external network is created, users can create a Private Network to complete their networking setup. For instructions, see Book “User Guide”, Chapter 8 “Creating a Private Network”.
