Applies to HPE Helion OpenStack 8

23 Integrations

Once you have completed your cloud installation, these are some of the common integrations you may want to perform.

23.1 Configuring for 3PAR Block Storage Backend

This page describes how to configure your 3PAR backend for the HPE Helion OpenStack Entry-scale with KVM cloud model.

23.1.1 Prerequisites

  • You must have licenses for the following software before you start your 3PAR backend configuration for the HPE Helion OpenStack Entry-scale with KVM cloud model:

    • Thin Provisioning

    • Virtual Copy

    • System Reporter

    • Dynamic Optimization

    • Priority Optimization

  • Your HPE Helion OpenStack Entry-scale KVM Cloud should be up and running. Installation steps can be found in Chapter 12, Installing Mid-scale and Entry-scale KVM.

  • Your 3PAR Storage Array should be available in the cloud management network (or routed to it), and the 3PAR FC and iSCSI ports should be configured.

  • The 3PAR management IP and iSCSI port IPs must have connectivity from the controller and compute nodes.

  • Please refer to the system requirements for 3PAR in the OpenStack configuration guide, which can be found here: 3PAR System Requirements.

23.1.2 Notes

The cinder_admin role must be added in order to configure 3PAR iSCSI as a volume type in Horizon.

ardana > source ~/service.osrc
ardana > openstack role add --user admin --project admin cinder_admin

Encrypted 3PAR Volume: Attaching an encrypted 3PAR volume is possible after installation by setting volume_use_multipath = true under the [libvirt] stanza in the nova/kvm-hypervisor.conf.j2 file and reconfiguring Nova.

Concerning multiple backends: If you are using multiple backend options, ensure that you specify each of the backends you are using when configuring your cinder.conf.j2 file, using a comma-delimited list. Also create multiple volume types so you can specify a backend to utilize when creating volumes. Instructions are included below, and a short example follows. You can also read the OpenStack documentation about Cinder multiple storage backends.
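
For example, to enable both a 3PAR FC and a 3PAR iSCSI backend, the enabled_backends line in cinder.conf.j2 would look like the following sketch (section names are placeholders):

# Configure the enabled backends
enabled_backends=3par_FC,3par_iSCSI

You can then create one volume type per backend and map it to the backend through the volume_backend_name property; the type and backend names below are placeholders:

ardana > source ~/service.osrc
ardana > openstack volume type create 3par_fc_type
ardana > openstack volume type set --property volume_backend_name=<3par-fc-backend-name> 3par_fc_type
ardana > openstack volume type create 3par_iscsi_type
ardana > openstack volume type set --property volume_backend_name=<3par-iscsi-backend-name> 3par_iscsi_type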

Concerning iSCSI and Fibre Channel: You should not configure Cinder backends so that multipath volumes are exported over both iSCSI and Fibre Channel from a 3PAR backend to the same Nova compute server.

3PAR driver correct name: In a previous release, the 3PAR drivers used for HPE Helion OpenStack integration were renamed from HP3PARFCDriver and HP3PARISCSIDriver to HPE3PARFCDriver and HPE3PARISCSIDriver respectively (HP changed to HPE). You may get a warning or an error if the deprecated names are used. The correct values are those in ~/openstack/my_cloud/config/cinder/cinder.conf.j2.
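
For reference, the renamed driver classes are configured as follows:

# FC backend
volume_driver = cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver
# iSCSI backend
volume_driver = cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver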

23.1.3 Multipath Support

Setting up multipath support is highly recommended for 3PAR FC/iSCSI backends, and should be considered a default best practice. For instructions on this process, refer to the ~/openstack/ardana/ansible/roles/multipath/README.md file on the Cloud Lifecycle Manager. The README.md file contains detailed procedures for configuring multipath for 3PAR FC/iSCSI Cinder volumes.
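
Once multipath has been configured and a volume has been attached, you can verify the multipath topology on the affected node with the multipath tooling (assuming the multipath-tools package is installed), for example:

root # multipath -ll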

Warning

If multipath functionality is enabled, ensure that all 3PAR fibre channel ports are active and zoned correctly in the 3PAR storage for proper operation.

In addition, take the following steps to enable 3PAR FC/iSCSI multipath support in OpenStack configuration files:

  1. Log in to the Cloud Lifecycle Manager.

  2. Edit the ~/openstack/my_cloud/config/nova/kvm-hypervisor.conf.j2 file and add this line under the [libvirt] section:

    Example:

    [libvirt]
    ...
    iscsi_use_multipath=true

    Additionally, if you are planning on attaching an encrypted 3PAR volume after installation, set volume_use_multipath=true in the same section.

  3. Edit the file ~/openstack/my_cloud/config/cinder/cinder.conf.j2 and add the following lines in the [3par] section:

    Example:

    [3par]
    ...
    enforce_multipath_for_image_xfer=True
    use_multipath_for_image_xfer=True
  4. Commit your configuration to the local git repo (Chapter 10, Using Git for Configuration Management), as follows:

    cd ~/openstack/ardana/ansible
    git add -A
    git commit -m "My config or other commit message"
  5. Run the configuration processor:

    cd ~/openstack/ardana/ansible
    ansible-playbook -i hosts/localhost config-processor-run.yml
  6. Use the playbook below to create a deployment directory:

    cd ~/openstack/ardana/ansible
    ansible-playbook -i hosts/localhost ready-deployment.yml
  7. Run the Nova reconfigure playbook:

    cd ~/scratch/ansible/next/ardana/ansible
    ansible-playbook -i hosts/verb_hosts nova-reconfigure.yml

23.1.4 Configure 3PAR FC as a Cinder Backend

You must modify the cinder.conf.j2 file to configure the FC details.

Perform the following steps to configure 3PAR FC as a Cinder backend:

  1. Log in to the Cloud Lifecycle Manager.

  2. Make the following changes to the ~/openstack/my_cloud/config/cinder/cinder.conf.j2 file:

    1. Add your 3PAR backend to the enabled_backends section:

      # Configure the enabled backends
      enabled_backends=3par_FC
      Important

      If you are using multiple backend types, you can use a comma-delimited list here.

    2. [OPTIONAL] If you want your volumes to use a default volume type, then enter the name of the volume type in the [DEFAULT] section with the syntax below. Remember this value for when you create your volume type in the next section.

      Important

      If you do not specify a default type, your volumes will default unpredictably. We recommend that you create a volume type that meets the needs of your environment and specify it here.

      [DEFAULT]
      # Set the default volume type
      default_volume_type = <your new volume type>
    3. Uncomment the StoreServ (3par) FC cluster section and fill in the values per your cluster information. Storage performance can be improved by enabling the Image-Volume cache. Here is an example:

      [3par_FC]
      san_ip: <3par-san-ipaddr>
      san_login: <3par-san-username>
      san_password: <3par-san-password>
      hpe3par_username: <3par-username>
      hpe3par_password: <hpe3par_password>
      hpe3par_api_url: https://<3par-san-ipaddr>:8080/api/v1
      hpe3par_cpg: <3par-cpg-name-1>[,<3par-cpg-name-2>, ...]
      volume_backend_name: <3par-backend-name>
      volume_driver = cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver
      image_volume_cache_enabled = True
    Important

    Do not use the backend_host variable in the cinder.conf file. If backend_host is set, it will override the [DEFAULT]/host value, on which HPE Helion OpenStack 8 depends.

  3. Commit your configuration to the local git repo (Chapter 10, Using Git for Configuration Management), as follows:

    cd ~/openstack/ardana/ansible
    git add -A
    git commit -m "My config or other commit message"
  4. Run the configuration processor:

    cd ~/openstack/ardana/ansible
    ansible-playbook -i hosts/localhost config-processor-run.yml
  5. Update your deployment directory:

    cd ~/openstack/ardana/ansible
    ansible-playbook -i hosts/localhost ready-deployment.yml
  6. Run the following playbook to complete the configuration:

    cd ~/scratch/ansible/next/ardana/ansible
    ansible-playbook -i hosts/verb_hosts cinder-reconfigure.yml
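
As an optional sanity check after the reconfiguration completes, you can confirm that the new backend's volume service is registered and reported as up:

ardana > source ~/service.osrc
ardana > openstack volume service list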

23.1.5 Configure 3PAR iSCSI as a Cinder Backend

You must modify the cinder.conf.j2 file to configure the iSCSI details.

Perform the following steps to configure 3PAR iSCSI as a Cinder backend:

  1. Log in to the Cloud Lifecycle Manager.

  2. Make the following changes to the ~/openstack/my_cloud/config/cinder/cinder.conf.j2 file:

    1. Add your 3PAR backend to the enabled_backends section:

      # Configure the enabled backends
      enabled_backends=3par_iSCSI
    2. Uncomment the StoreServ (3par) iSCSI cluster section and fill in the values per your cluster information. Here is an example:

      [3par_iSCSI]
      san_ip: <3par-san-ipaddr>
      san_login: <3par-san-username>
      san_password: <3par-san-password>
      hpe3par_username: <3par-username>
      hpe3par_password: <hpe3par_password>
      hpe3par_api_url: https://<3par-san-ipaddr>:8080/api/v1
      hpe3par_cpg: <3par-cpg-name-1>[,<3par-cpg-name-2>, ...]
      volume_backend_name: <3par-backend-name>
      volume_driver = cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
      hpe3par_iscsi_ips: <3par-ip-address-1>[,<3par-ip-address-2>,<3par-ip-address-3>, ...]
      hpe3par_iscsi_chap_enabled=true
      Important

      Do not use the backend_host variable in the cinder.conf file. If backend_host is set, it will override the [DEFAULT]/host value, on which HPE Helion OpenStack 8 depends.

  3. Commit your configuration to your local git repository:

    cd ~/openstack/ardana/ansible
    git add -A
    git commit -m "<commit message>"
  4. Run the configuration processor:

    cd ~/openstack/ardana/ansible
    ansible-playbook -i hosts/localhost config-processor-run.yml

    When you run the configuration processor, you will be prompted for two passwords. Enter the first password to make the configuration processor encrypt its sensitive data, which consists of the random inter-service passwords that it generates and the Ansible group_vars and host_vars that it produces for subsequent deploy runs. You will need this key for subsequent Ansible deploy runs and configuration processor runs. If you wish to change an encryption password that you have already used when running the configuration processor, enter the new password at the second prompt; otherwise, press Enter.

    For CI purposes you can specify the required passwords on the Ansible command line. For example, the command below will disable encryption by the configuration processor:

    ansible-playbook -i hosts/localhost config-processor-run.yml \
      -e encrypt="" -e rekey=""

    If you receive an error during either of these steps then there is an issue with one or more of your configuration files. We recommend that you verify that all of the information in each of your configuration files is correct for your environment and then commit those changes to git using the instructions above.

  5. Run the following command to create a deployment directory.

    cd ~/openstack/ardana/ansible
    ansible-playbook -i hosts/localhost ready-deployment.yml
  6. Run the following command to complete the configuration:

    cd ~/scratch/ansible/next/ardana/ansible
    ansible-playbook -i hosts/verb_hosts cinder-reconfigure.yml
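
Optionally, once a volume type mapped to the backend exists (see Section 23.1.2), you can verify the configuration by creating a small test volume; the type name below is a placeholder:

ardana > source ~/service.osrc
ardana > openstack volume create --type <your-3par-volume-type> --size 1 test-3par-volume
ardana > openstack volume show test-3par-volume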

23.1.6 Post-Installation Tasks

After configuring 3PAR as your Block Storage backend, perform the following tasks:

23.2 Ironic HPE OneView Integration

HPE Helion OpenStack 8 supports integration of the Ironic (Bare Metal) service with HPE OneView using the agent_pxe_oneview driver. Refer to the OpenStack documentation for more information.

23.2.1 Prerequisites

  1. HPE Helion OpenStack 8 is installed with the entry-scale-ironic-flat-network or entry-scale-ironic-multi-tenancy model.

  2. An HPE OneView 3.0 instance is running and connected to the management network.

  3. The HPE OneView configuration is set in definition/data/ironic/ironic_config.yml (and the ironic-reconfigure.yml playbook has been run if needed). This enables the agent_pxe_oneview driver in the Ironic conductor.

  4. Managed node(s) should support PXE booting in legacy BIOS mode.

  5. Managed node(s) should have the PXE boot NIC listed first. That is, the embedded 1 Gb NIC must be disabled (otherwise it always comes first in the boot order).

23.2.2 Integrating with HPE OneView

  1. On the Cloud Lifecycle Manager, open the file ~/openstack/my_cloud/definition/data/ironic/ironic_config.yml

    ~$ cd ~/openstack
    ~/openstack$ vi my_cloud/definition/data/ironic/ironic_config.yml
  2. Modify the settings listed below (an example of the edited file is shown after this procedure):

    1. enable_oneview: should be set to "true" for HPE OneView integration

    2. oneview_manager_url: HTTPS endpoint of HPE OneView management interface, for example: https://10.0.0.10/

    3. oneview_username: HPE OneView username, for example: Administrator

    4. oneview_encrypted_password: HPE OneView password, in encrypted or clear-text form. The encrypted form is distinguished by the presence of @ardana@ at the beginning of the string. The encrypted form can be created by running the ardanaencrypt.py program, which is shipped as part of HPE Helion OpenStack and can be found in the ~/openstack/ardana/ansible directory on the Cloud Lifecycle Manager.

    5. oneview_allow_insecure_connections: should be set to "true" if HPE OneView is using a self-signed certificate.

  3. Once you have saved your changes and exited the editor, add files, commit changes to local git repository, and run config-processor-run.yml and ready-deployment.yml playbooks, as described in Chapter 10, Using Git for Configuration Management.

    ~/openstack$ git add my_cloud/definition/data/ironic/ironic_config.yml
    ~/openstack$ cd ardana/ansible
    ~/openstack/ardana/ansible$ ansible-playbook -i hosts/localhost \
      config-processor-run.yml
    ...
    ~/openstack/ardana/ansible$ ansible-playbook -i hosts/localhost \
      ready-deployment.yml
  4. Run ironic-reconfigure.yml playbook.

    $ cd ~/scratch/ansible/next/ardana/ansible/
    
    # This is needed if password was encrypted in ironic_config.yml file
    ~/scratch/ansible/next/ardana/ansible$ export ARDANA_USER_PASSWORD_ENCRYPT_KEY=your_password_encrypt_key
    ~/scratch/ansible/next/ardana/ansible$ ansible-playbook -i hosts/verb_hosts ironic-reconfigure.yml
    ...
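
For reference, a hedged example of how the edited settings in ironic_config.yml might look (all values are placeholders; your file may contain additional options):

enable_oneview: true
oneview_manager_url: https://10.0.0.10/
oneview_username: Administrator
oneview_encrypted_password: "@ardana@<encrypted-password-string>"
oneview_allow_insecure_connections: true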

23.2.3 Registering Node in HPE OneView

In the HPE OneView web interface:

  1. Navigate to Menu › Server Hardware. Add a new Server Hardware item, using the managed node's IPMI IP address and credentials. If this is the first node of this type being added, the corresponding Server Hardware Type will be created automatically.

  2. Navigate to Menu › Server Profile Template. Add a Server Profile Template, using the Server Hardware Type corresponding to the node being registered. In the BIOS Settings section, the Manage Boot Mode and Manage Boot Order options must be turned on.

  3. Verify that the node is powered off. Power it off if needed.

    Warning

    HPE OneView does not support managing boot order for HPE DL servers in UEFI mode. Therefore, HPE DL servers can only be managed in Legacy BIOS mode.

23.2.4 Provisioning Ironic Node

  1. Log in to the Cloud Lifecycle Manager and source the respective credentials file (for example, service.osrc for the admin account).

  2. Review Glance images with glance image-list:

    $ glance image-list
    +--------------------------------------+--------------------------+
    | ID                                   | Name                     |
    +--------------------------------------+--------------------------+
    | c61da588-622c-4285-878f-7b86d87772da | cirros-0.3.4-x86_64      |
    +--------------------------------------+--------------------------+

    Ironic deploy images (boot image, ir-deploy-kernel, ir-deploy-ramdisk, ir-deploy-iso) are created automatically. The agent_pxe_oneview Ironic driver requires ir-deploy-kernel and ir-deploy-ramdisk images.

  3. Create node using agent_pxe_oneview driver.

    $ ironic --ironic-api-version 1.22 node-create -d agent_pxe_oneview --name test-node-1 \
      --network-interface neutron -p memory_mb=131072 -p cpu_arch=x86_64 -p local_gb=80 -p cpus=2 \
      -p 'capabilities=boot_mode:bios,boot_option:local,server_hardware_type_uri:\
         /rest/server-hardware-types/E5366BF8-7CBF-48DF-A752-8670CF780BB2,server_profile_template_uri:\
         /rest/server-profile-templates/00614918-77f8-4146-a8b8-9fc276cd6ab2' \
      -i 'server_hardware_uri=/rest/server-hardware/32353537-3835-584D-5135-313930373046' \
      -i dynamic_allocation=True \
      -i deploy_kernel=633d379d-e076-47e6-b56d-582b5b977683 \
      -i deploy_ramdisk=d5828785-edf2-49fa-8de2-3ddb7f3270d5
    
    +-------------------+--------------------------------------------------------------------------+
    | Property          | Value                                                                    |
    +-------------------+--------------------------------------------------------------------------+
    | chassis_uuid      |                                                                          |
    | driver            | agent_pxe_oneview                                                        |
    | driver_info       | {u'server_hardware_uri': u'/rest/server-                                 |
    |                   | hardware/32353537-3835-584D-5135-313930373046', u'dynamic_allocation':   |
    |                   | u'True', u'deploy_ramdisk': u'd5828785-edf2-49fa-8de2-3ddb7f3270d5',     |
    |                   | u'deploy_kernel': u'633d379d-e076-47e6-b56d-582b5b977683'}               |
    | extra             | {}                                                                       |
    | name              | test-node-1                                                              |
    | network_interface | neutron                                                                  |
    | properties        | {u'memory_mb': 131072, u'cpu_arch': u'x86_64', u'local_gb': 80, u'cpus': |
    |                   | 2, u'capabilities':                                                      |
    |                   | u'boot_mode:bios,boot_option:local,server_hardware_type_uri:/rest        |
    |                   | /server-hardware-types/E5366BF8-7CBF-                                    |
    |                   | 48DF-A752-8670CF780BB2,server_profile_template_uri:/rest/server-profile- |
    |                   | templates/00614918-77f8-4146-a8b8-9fc276cd6ab2'}                         |
    | resource_class    | None                                                                     |
    | uuid              | c202309c-97e2-4c90-8ae3-d4c95afdaf06                                     |
    +-------------------+--------------------------------------------------------------------------+
    Note
    • For deployments created via Ironic/HPE OneView integration, the memory_mb property must reflect the physical amount of RAM installed in the managed node. That is, for a server with 128 GB of RAM it works out to 128*1024=131072.

    • The boot mode in the capabilities property must reflect the boot mode used by the server, that is, 'bios' for Legacy BIOS and 'uefi' for UEFI.

    • Values for server_hardware_type_uri, server_profile_template_uri and server_hardware_uri can be taken from the browser URL field while navigating to the respective objects in the HPE OneView UI. The URI corresponds to the part of the URL that starts from the token /rest. That is, the URL https://oneview.mycorp.net/#/profile-templates/show/overview/r/rest/server-profile-templates/12345678-90ab-cdef-0123-012345678901 corresponds to the URI /rest/server-profile-templates/12345678-90ab-cdef-0123-012345678901.

    • Take the IDs of deploy_kernel and deploy_ramdisk from the glance image-list output above.

  4. Create port.

    $ ironic --ironic-api-version 1.22 port-create \
      --address aa:bb:cc:dd:ee:ff \
      --node c202309c-97e2-4c90-8ae3-d4c95afdaf06 \
      -l switch_id=ff:ee:dd:cc:bb:aa \
      -l switch_info=MY_SWITCH \
      -l port_id="Ten-GigabitEthernet 1/0/1" \
      --pxe-enabled true
    +-----------------------+----------------------------------------------------------------+
    | Property              | Value                                                          |
    +-----------------------+----------------------------------------------------------------+
    | address               | 8c:dc:d4:b5:7d:1c                                              |
    | extra                 | {}                                                             |
    | local_link_connection | {u'switch_info': u'C20DATA', u'port_id': u'Ten-GigabitEthernet |
    |                       | 1/0/1',    u'switch_id': u'ff:ee:dd:cc:bb:aa'}                 |
    | node_uuid             | c202309c-97e2-4c90-8ae3-d4c95afdaf06                           |
    | pxe_enabled           | True                                                           |
    | uuid                  | 75b150ef-8220-4e97-ac62-d15548dc8ebe                           |
    +-----------------------+----------------------------------------------------------------+
    Warning

    Ironic Multi-Tenancy networking model is used in this example. Therefore, ironic port-create command contains information about the physical switch. HPE OneView integration can also be performed using the Ironic Flat Networking model. For more information, see Book “Planning an Installation with Cloud Lifecycle Manager”, Chapter 9 “Example Configurations”, Section 9.6 “Ironic Examples”.

  5. Move the node to the manageable provisioning state. Connectivity between Ironic and HPE OneView will be verified, the Server Profile Template settings validated, and the Server Hardware power status retrieved from HPE OneView and set on the Ironic node.

    $ ironic node-set-provision-state test-node-1 manage
  6. Verify that node power status is populated.

    $ ironic node-show test-node-1
    +-----------------------+-----------------------------------------------------------------------+
    | Property              | Value                                                                 |
    +-----------------------+-----------------------------------------------------------------------+
    | chassis_uuid          |                                                                       |
    | clean_step            | {}                                                                    |
    | console_enabled       | False                                                                 |
    | created_at            | 2017-06-30T21:00:26+00:00                                             |
    | driver                | agent_pxe_oneview                                                     |
    | driver_info           | {u'server_hardware_uri': u'/rest/server-                              |
    |                       | hardware/32353537-3835-584D-5135-313930373046', u'dynamic_allocation':|
    |                       | u'True', u'deploy_ramdisk': u'd5828785-edf2-49fa-8de2-3ddb7f3270d5',  |
    |                       | u'deploy_kernel': u'633d379d-e076-47e6-b56d-582b5b977683'}            |
    | driver_internal_info  | {}                                                                    |
    | extra                 | {}                                                                    |
    | inspection_finished_at| None                                                                  |
    | inspection_started_at | None                                                                  |
    | instance_info         | {}                                                                    |
    | instance_uuid         | None                                                                  |
    | last_error            | None                                                                  |
    | maintenance           | False                                                                 |
    | maintenance_reason    | None                                                                  |
    | name                  | test-node-1                                                           |
    | network_interface     |                                                                       |
    | power_state           | power off                                                             |
    | properties            | {u'memory_mb': 131072, u'cpu_arch': u'x86_64', u'local_gb': 80,       |
    |                       | u'cpus': 2, u'capabilities':                                          |
    |                       | u'boot_mode:bios,boot_option:local,server_hardware_type_uri:/rest     |
    |                       | /server-hardware-types/E5366BF8-7CBF-                                 |
    |                       | 48DF-A752-86...BB2,server_profile_template_uri:/rest/server-profile-  |
    |                       | templates/00614918-77f8-4146-a8b8-9fc276cd6ab2'}                      |
    | provision_state       | manageable                                                            |
    | provision_updated_at  | 2017-06-30T21:04:43+00:00                                             |
    | raid_config           |                                                                       |
    | reservation           | None                                                                  |
    | resource_class        |                                                                       |
    | target_power_state    | None                                                                  |
    | target_provision_state| None                                                                  |
    | target_raid_config    |                                                                       |
    | updated_at            | 2017-06-30T21:04:43+00:00                                             |
    | uuid                  | c202309c-97e2-4c90-8ae3-d4c95afdaf06                                  |
    +-----------------------+-----------------------------------------------------------------------+
  7. Move the node to the available provisioning state. The Ironic node will be reported to Nova as available.

    $ ironic node-set-provision-state test-node-1 provide
  8. Verify that node resources were added to Nova hypervisor stats.

    $ nova hypervisor-stats
    +----------------------+--------+
    | Property             | Value  |
    +----------------------+--------+
    | count                | 1      |
    | current_workload     | 0      |
    | disk_available_least | 80     |
    | free_disk_gb         | 80     |
    | free_ram_mb          | 131072 |
    | local_gb             | 80     |
    | local_gb_used        | 0      |
    | memory_mb            | 131072 |
    | memory_mb_used       | 0      |
    | running_vms          | 0      |
    | vcpus                | 2      |
    | vcpus_used           | 0      |
    +----------------------+--------+
  9. Create Nova flavor.

    $ nova flavor-create m1.ironic auto 131072 80 2
    +-------------+-----------+--------+------+-----------+------+-------+-------------+-----------+
    | ID          | Name      | Mem_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
    +-------------+-----------+--------+------+-----------+------+-------+-------------+-----------+
    | 33c8...f8d8 | m1.ironic | 131072 | 80   | 0         |      | 2     | 1.0         | True      |
    +-------------+-----------+--------+------+-----------+------+-------+-------------+-----------+
    $ nova flavor-key m1.ironic set capabilities:boot_mode="bios"
    $ nova flavor-key m1.ironic set capabilities:boot_option="local"
    $ nova flavor-key m1.ironic set cpu_arch=x86_64
    Note

    All parameters (specifically, amount of RAM and boot mode) must correspond to ironic node parameters.

  10. Create Nova keypair if needed.

    $ nova keypair-add ironic_kp --pub-key ~/.ssh/id_rsa.pub
  11. Boot Nova instance.

    $ nova boot --flavor m1.ironic --image d6b5...e942 --key-name ironic_kp \
      --nic net-id=5f36...dcf3 test-node-1
    +-------------------------------+-----------------------------------------------------+
    | Property                      | Value                                               |
    +-------------------------------+-----------------------------------------------------+
    | OS-DCF:diskConfig             | MANUAL                                              |
    | OS-EXT-AZ:availability_zone   |                                                     |
    | OS-EXT-SRV-ATTR:host          | -                                                   |
    | OS-EXT-SRV-ATTR:              |                                                     |
    |       hypervisor_hostname     | -                                                   |
    | OS-EXT-SRV-ATTR:instance_name |                                                     |
    | OS-EXT-STS:power_state        | 0                                                   |
    | OS-EXT-STS:task_state         | scheduling                                          |
    | OS-EXT-STS:vm_state           | building                                            |
    | OS-SRV-USG:launched_at        | -                                                   |
    | OS-SRV-USG:terminated_at      | -                                                   |
    | accessIPv4                    |                                                     |
    | accessIPv6                    |                                                     |
    | adminPass                     | pE3m7wRACvYy                                        |
    | config_drive                  |                                                     |
    | created                       | 2017-06-30T21:08:42Z                                |
    | flavor                        | m1.ironic (33c81884-b8aa-46...3b72f8d8)             |
    | hostId                        |                                                     |
    | id                            | b47c9f2a-e88e-411a-abcd-6172aea45397                |
    | image                         | Ubuntu Trusty 14.04 BIOS (d6b5d971-42...5f2d88e942) |
    | key_name                      | ironic_kp                                           |
    | metadata                      | {}                                                  |
    | name                          | test-node-1                                         |
    | os-extended-volumes:          |                                                     |
    |       volumes_attached        | []                                                  |
    | progress                      | 0                                                   |
    | security_groups               | default                                             |
    | status                        | BUILD                                               |
    | tenant_id                     | c8573f7026d24093b40c769ca238fddc                    |
    | updated                       | 2017-06-30T21:08:42Z                                |
    | user_id                       | 2eae99221545466d8f175eeb566cc1b4                    |
    +-------------------------------+-----------------------------------------------------+

    During Nova instance boot, the following operations are performed by Ironic via the HPE OneView REST API:

    • In HPE OneView, a new Server Profile is generated for the specified Server Hardware, using the specified Server Profile Template. The boot order in the Server Profile is set to list PXE as the first boot source.

    • The managed node is powered on and boots the IPA (Ironic Python Agent) image via PXE.

    • The IPA image writes the user image to disk and reports success back to Ironic.

    • Ironic modifies the Server Profile in HPE OneView to list 'Disk' as the default boot option.

    • Ironic reboots the node (via an HPE OneView REST API call).
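
    To follow the deployment progress (an optional check, not part of the original procedure), you can watch the instance and node states until the instance reaches ACTIVE and the node's provision_state reaches active:

    $ nova list
    $ ironic node-show test-node-1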

23.3 SUSE Enterprise Storage Integration

The current version of HPE Helion OpenStack supports integration with SUSE Enterprise Storage (SES). Integrating SUSE Enterprise Storage enables Ceph block storage as well as object and image storage services in HPE Helion OpenStack.

23.3.1 Enabling SUSE Enterprise Storage Integration

The SUSE Enterprise Storage integration is provided through the ardana-ses RPM package. This package is included in the patterns-cloud-ardana pattern; its installation is covered in Chapter 3, Installing the Cloud Lifecycle Manager server. The update repositories and the installation covered there are required to support SUSE Enterprise Storage integration. Apply the latest updates before proceeding.

23.3.2 Configuration

After the SUSE Enterprise Storage integration package has been installed, it must be configured. Files that contain relevant SUSE Enterprise Storage/Ceph deployment information must be placed into a directory on the deployer node. This includes the configuration file that describes various aspects of the Ceph environment as well as keyrings for each user and pool created in the Ceph environment. In addition to that, you need to edit the settings.yml file to enable the SUSE Enterprise Storage integration to run and update all of the HPE Helion OpenStack service configuration files.

The settings.yml file resides in the ~/openstack/my_cloud/config/ses/ directory. Open the file for editing and uncomment the ses_config_path: parameter to specify the location on the deployer host that contains the ses_config.yml file and all Ceph keyring files. For example:

# the directory where the ses config files are.
ses_config_path: /var/lib/ardana/openstack/my_cloud/config/ses/
ses_config_file: ses_config.yml

# Allow nova libvirt images_type to be set to rbd?
# Set this to false, if you only want rbd_user and rbd_secret to be set
# in the [libvirt] section of hypervisor.conf
ses_nova_set_images_type: True

# The unique uuid for use with virsh for cinder and nova
ses_secret_id: 457eb676-33da-42ec-9a8c-9293d545c337
Important

If you are integrating with SUSE Enterprise Storage and do not want to store Nova images in Ceph, change the line in settings.yml from ses_nova_set_images_type: True to ses_nova_set_images_type: False

For security reasons, you should use a unique UUID in the settings.yml file for ses_secret_id, replacing the fixed, hard-coded UUID in that file. You can generate a UUID that will be unique to your deployment using the command uuidgen.
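
For example (your generated value will differ):

ardana > uuidgen
f1e2d3c4-5b6a-4789-9abc-def012345678

Copy the generated value into the ses_secret_id line of settings.yml.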

For SES deployments of version 5.5 and higher, there is a Salt runner that can create all the users, keyrings, and pools. It also generates a YAML configuration that is needed to integrate with HPE Helion OpenStack. The integration runner creates the cinder, cinder-backup, and glance Ceph users. Both the Cinder and Nova services use the same user, as Cinder needs access to create objects that Nova uses.

Log in as root to run the SES 5.5 Salt runner on the Salt admin host.

root # salt-run --out=yaml openstack.integrate prefix=mycloud

The prefix parameter allows pools to be created with the specified prefix. In this way, multiple cloud deployments can use different users and pools on the same SES deployment.

Sample YAML output:

ceph_conf:
  cluster_network: 10.84.56.0/21
  fsid: d5d7c7cb-5858-3218-a36f-d028df7b0673
  mon_host: 10.84.56.8, 10.84.56.9, 10.84.56.7
  mon_initial_members: ses-osd1, ses-osd2, ses-osd3
  public_network: 10.84.56.0/21
cinder:
  key: AQCdfIRaxefEMxAAW4zp2My/5HjoST2Y8mJg8A==
  rbd_store_pool: mycloud-cinder
  rbd_store_user: cinder
cinder-backup:
  key: AQBb8hdbrY2bNRAAqJC2ZzR5Q4yrionh7V5PkQ==
  rbd_store_pool: mycloud-backups
  rbd_store_user: cinder-backup
glance:
  key: AQD9eYRachg1NxAAiT6Hw/xYDA1vwSWLItLpgA==
  rbd_store_pool: mycloud-glance
  rbd_store_user: glance
nova:
  rbd_store_pool: mycloud-nova
radosgw_urls:
  - http://10.84.56.7:80/swift/v1
  - http://10.84.56.8:80/swift/v1

After you have run the openstack.integrate runner, copy the yaml output into the new ses_config.yml file, and save this file in the path specified in the settings.yml file on the deployer node. In this case, the file ses_config.yml must be saved in the /var/lib/ardana/openstack/my_cloud/config/ses/ directory on the deployer node.

For SUSE Enterprise Storage/Ceph deployments older than version 5.5, the following applies. For Ceph, it is necessary to create pools and users to allow the HPE Helion OpenStack services to use the SUSE Enterprise Storage/Ceph cluster. Pools and users must be created for the Cinder, Cinder backup, and Glance services. Both the Cinder and Nova services must use the same user, as Cinder needs access to create objects that Nova uses. Instructions for creating and managing pools, users, and keyrings are covered in the SUSE Enterprise Storage documentation under Key Management.
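
As a rough sketch only (the authoritative procedure and capability strings are in the SUSE Enterprise Storage documentation), creating a pool and a matching keyring on the Ceph admin node might look like this:

root # ceph osd pool create cinder 128   # 128 placement groups is only an example value
root # ceph auth get-or-create client.cinder \
  mon 'allow r' \
  osd 'allow class-read object_prefix rbd_children, allow rwx pool=cinder, allow rwx pool=nova, allow rx pool=glance' \
  -o ceph.client.cinder.keyring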

Example of ses_config.yml:

ses_cluster_configuration:
    ses_cluster_name: ceph
    ses_radosgw_url: "https://192.168.56.8:8080/swift/v1"

    conf_options:
        ses_fsid: d5d7c7cb-5858-3218-a36f-d028df7b1111
        ses_mon_initial_members: ses-osd2, ses-osd3, ses-osd1
        ses_mon_host: 192.168.56.8, 192.168.56.9, 192.168.56.7
        ses_public_network: 192.168.56.0/21
        ses_cluster_network: 192.168.56.0/21

    cinder:
        rbd_store_pool: cinder
        rbd_store_pool_user: cinder
        keyring_file_name: ceph.client.cinder.keyring

    cinder-backup:
        rbd_store_pool: backups
        rbd_store_pool_user: cinder_backup
        keyring_file_name: ceph.client.cinder-backup.keyring

    # Nova uses the cinder user to access the nova pool, cinder pool
    # So all we need here is the nova pool name.
    nova:
        rbd_store_pool: nova

    glance:
        rbd_store_pool: glance
        rbd_store_pool_user: glance
        keyring_file_name: ceph.client.glance.keyring

Example contents of the directory specified in settings.yml file:

ardana > ls -al ~/openstack/my_cloud/config/ses
ceph.client.cinder-backup.keyring
ceph.client.cinder.keyring
ceph.client.glance.keyring
ses_config.yml

Modify the glance_default_store option in ~/openstack/my_cloud/definition/data/control_plane.yml:

.
.
            - rabbitmq
#            - glance-api
            - glance-api:
                glance_default_store: 'rbd'
            - glance-registry
            - glance-client
  1. Commit your configuration to your local git repo:

    ardana > cd ~/openstack/ardana/ansible
    ardana > git add -A
    ardana > git commit -m "add SES integration"
  2. Run the configuration processor.

    ardana > cd ~/openstack/ardana/ansible
    ardana > ansible-playbook -i hosts/localhost config-processor-run.yml
  3. Create a deployment directory.

    ardana > cd ~/openstack/ardana/ansible
    ardana > ansible-playbook -i hosts/localhost ready-deployment.yml
  4. Run a series of reconfiguration playbooks.

    ardana > cd ~/scratch/ansible/next/ardana/ansible
    ardana > ansible-playbook -i hosts/verb_hosts ses-deploy.yml
    ardana > ansible-playbook -i hosts/verb_hosts cinder-reconfigure.yml
    ardana > ansible-playbook -i hosts/verb_hosts glance-reconfigure.yml
    ardana > ansible-playbook -i hosts/verb_hosts nova-reconfigure.yml

Configuring SUSE Enterprise Storage for Integration with RADOS Gateway

RADOS gateway integration can be enabled or disabled by adding or removing the following line in ses_config.yml:

ses_radosgw_url: "https://192.168.56.8:8080/swift/v1"

If RADOS gateway integration is enabled, additional SUSE Enterprise Storage configuration is needed. The RADOS gateway must be configured to use Keystone for authentication. This is done by adding the configuration statements below to the RADOS gateway client section of ceph.conf on the RADOS gateway node.

[client.rgw.HOSTNAME]
rgw frontends = "civetweb port=80+443s"
rgw enable usage log = true
rgw keystone url = KEYSTONE_ENDPOINT (for example: https://192.168.24.204:5000)
rgw keystone admin user = KEYSTONE_ADMIN_USER
rgw keystone admin password = KEYSTONE_ADMIN_PASSWORD
rgw keystone admin project = KEYSTONE_ADMIN_PROJECT
rgw keystone admin domain = KEYSTONE_ADMIN_DOMAIN
rgw keystone api version = 3
rgw keystone accepted roles = admin,Member,_member_
rgw keystone accepted admin roles = admin
rgw keystone revocation interval = 0
rgw keystone verify ssl = false # If keystone is using a self-signed certificate

After making these changes to ceph.conf, the RADOS gateway service needs to be restarted.
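
For example, on the RADOS gateway node the service is typically managed by systemd; the exact unit name depends on the gateway instance name configured in ceph.conf:

root # systemctl restart ceph-radosgw@rgw.HOSTNAME.service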

Enabling RADOS gateway replaces the existing Object Storage endpoint with the RADOS gateway endpoint.

Enabling HTTPS, Creating and Importing a Certificate

SUSE Enterprise Storage integration uses the HTTPS protocol to connect to the RADOS gateway. However, with SUSE Enterprise Storage 5, HTTPS is not enabled by default. To enable the gateway role to communicate securely using SSL, you need to either have a CA-issued certificate or create a self-signed one. Instructions for both are available in the SUSE Enterprise Storage documentation.

The certificate needs to be installed on the Cloud Lifecycle Manager. Copy the certificate to /tmp/ardana_tls_cacerts and then deploy it.

ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts tls-trust-deploy.yml
ardana > ansible-playbook -i hosts/verb_hosts tls-reconfigure.yml

When creating the certificate, the subjectAltName must match the ses_radosgw_url entry in ses_config.yml. Either an IP address or FQDN can be used, but these values must be the same in both places.
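
You can inspect the subjectAltName of a certificate with openssl before deploying it, for example:

ardana > openssl x509 -in <certificate-file> -noout -text | grep -A1 "Subject Alternative Name"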

23.3.3 Deploying SUSE Enterprise Storage Configuration for RADOS Integration

The following steps will deploy your configuration.

  1. Commit your configuration to your local git repo.

    ardana > cd ~/openstack/ardana/ansible
    ardana > git add -A
    ardana > git commit -m "add SES integration"
  2. Run the configuration processor.

    ardana > cd ~/openstack/ardana/ansible
    ardana > ansible-playbook -i hosts/localhost config-processor-run.yml
  3. Create a deployment directory.

    ardana > cd ~/openstack/ardana/ansible
    ardana > ansible-playbook -i hosts/localhost ready-deployment.yml
  4. Run a series of reconfiguration playbooks.

    ardana > cd ~/scratch/ansible/next/ardana/ansible
    ardana > ansible-playbook -i hosts/verb_hosts ses-deploy.yml
    ardana > ansible-playbook -i hosts/verb_hosts cinder-reconfigure.yml
    ardana > ansible-playbook -i hosts/verb_hosts glance-reconfigure.yml
    ardana > ansible-playbook -i hosts/verb_hosts nova-reconfigure.yml
  5. Reconfigure the Cloud Lifecycle Manager to complete the deployment.

    ardana > cd ~/scratch/ansible/next/ardana/ansible
    ardana > ansible-playbook -i hosts/verb_hosts ardana-reconfigure.yml

23.3.4 Enable Copy-On-Write Cloning of Images

Due to a security issue described in http://docs.ceph.com/docs/master/rbd/rbd-openstack/?highlight=uuid#enable-copy-on-write-cloning-of-images, we do not recommend the copy-on-write cloning of images when Glance and Cinder are both using a Ceph back-end. However, if you want to use this feature for faster operation, you can enable it as follows.

  1. Open the ~/openstack/my_cloud/config/glance/glance-api.conf.j2 file for editing and add show_image_direct_url = True under the [DEFAULT] section.

  2. Commit changes:

    ardana > git add -A
    ardana > git commit -m "Enable Copy-on-Write Cloning"
  3. Run the required playbooks:

    ardana > cd ~/openstack/ardana/ansible
    ardana > ansible-playbook -i hosts/localhost config-processor-run.yml
    ardana > ansible-playbook -i hosts/localhost ready-deployment.yml
    ardana > cd /var/lib/ardana/scratch/ansible/next/ardana/ansible
    ardana > ansible-playbook -i hosts/verb_hosts glance-reconfigure.yml
Warning

Note that this exposes the back-end location via the Glance API, so the endpoint should not be publicly accessible when Copy-On-Write image cloning is enabled.

23.3.5 Improve SUSE Enterprise Storage Performance

SUSE Enterprise Storage performance can be improved with the Image-Volume cache. Be aware that the Image-Volume cache and Copy-on-Write cloning cannot be used for the same storage back-end. For more information, see the OpenStack documentation.

Enable Image-Volume cache with the following steps:

  1. Open the ~/openstack/my_cloud/config/cinder/cinder.conf.j2 file for editing.

  2. Add the image_volume_cache_enabled = True option under the [ses_ceph] section (see the example after this procedure).

  3. Commit changes:

    ardana > git add -A
    ardana > git commit -m "Enable Image-Volume cache"
  4. Run the required playbooks:

    ardana > cd ~/openstack/ardana/ansible
    ardana > ansible-playbook -i hosts/localhost config-processor-run.yml
    ardana > ansible-playbook -i hosts/localhost ready-deployment.yml
    ardana > cd /var/lib/ardana/scratch/ansible/next/ardana/ansible
    ardana > ansible-playbook -i hosts/verb_hosts cinder-reconfigure.yml
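
For reference, the relevant part of cinder.conf.j2 might then look like the following sketch; only image_volume_cache_enabled is required, while the size and count limits shown are optional settings chosen here as an example:

[ses_ceph]
...
image_volume_cache_enabled = True
image_volume_cache_max_size_gb = 200
image_volume_cache_max_count = 50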