35 Integrations #
Once you have completed your cloud installation, these are some of the common integrations you may want to perform.
35.1 Configuring for 3PAR Block Storage Backend #
This page describes how to configure your 3PAR backend for the SUSE OpenStack Cloud Entry-scale with KVM cloud model.
35.1.1 Prerequisites #
You must have the license for the following software before you start your 3PAR backend configuration for the SUSE OpenStack Cloud Entry-scale with KVM cloud model:
Thin Provisioning
Virtual Copy
System Reporter
Dynamic Optimization
Priority Optimization
Your SUSE OpenStack Cloud Entry-scale KVM Cloud should be up and running. Installation steps can be found in Chapter 24, Installing Mid-scale and Entry-scale KVM.
Your 3PAR Storage Array should be available in the cloud management network or routed to the cloud management network and the 3PAR FC and iSCSI ports configured.
The 3PAR management IP and iSCSI port IPs must have connectivity from the controller and compute nodes.
Please refer to the system requirements for 3PAR in the OpenStack configuration guide, which can be found here: 3PAR System Requirements.
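Before proceeding, you can spot-check the connectivity requirement above from a controller or compute node. This is an optional sketch; the addresses are placeholders, and the WSAPI port (8080, matching the hpe3par_api_url used later in this chapter) is an assumption to adapt to your environment:

# Run from a controller or compute node; replace the placeholder addresses
ping -c 3 <3par-management-ip>
nc -zv <3par-management-ip> 8080      # 3PAR WSAPI, used by hpe3par_api_url
nc -zv <3par-iscsi-port-ip> 3260      # standard iSCSI target port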
35.1.2 Notes #
The cinder_admin role must be added in order to configure 3PAR iSCSI as a volume type in horizon.

ardana > source ~/service.osrc
ardana > openstack role add --user admin --project admin cinder_admin
Encrypted 3PAR Volume: Attaching an encrypted 3PAR volume is possible after installation by setting volume_use_multipath = true under the [libvirt] stanza in the nova/kvm-hypervisor.conf.j2 file and reconfiguring nova.
Using multiple backends: If you are using multiple backend options, ensure that you specify each of the backends you are using when configuring your cinder.conf.j2 file, using a comma-delimited list. Also create multiple volume types so you can specify a backend to use when creating volumes; instructions are included below. You can also read the OpenStack documentation about cinder multiple storage backends.
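For example, a hypothetical cinder.conf.j2 excerpt with two 3PAR backends and a volume type pinned to one of them might look like the following sketch (the backend and type names are illustrative only):

# cinder.conf.j2 (illustrative)
enabled_backends=3par_FC,3par_iSCSI

ardana > openstack volume type create 3par_fc
ardana > openstack volume type set --property volume_backend_name=<3par-fc-backend-name> 3par_fc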
iSCSI and Fibre Channel: You should not configure cinder backends so that multipath volumes are exported over both iSCSI and Fibre Channel from a 3PAR backend to the same nova compute server.
3PAR driver correct name: In a previous release, the 3PAR drivers used for SUSE OpenStack Cloud integration were renamed from HP3PARFCDriver and HP3PARISCSIDriver to HPE3PARFCDriver and HPE3PARISCSIDriver respectively (HP changed to HPE). You may get a warning or an error if the deprecated driver names are used. The correct values are those in ~/openstack/my_cloud/config/cinder/cinder.conf.j2.
35.1.3 Multipath Support #
If multipath functionality is enabled, ensure that all 3PAR fibre channel ports are active and zoned correctly in the 3PAR storage.
We recommend setting up multipath support for 3PAR FC/iSCSI as a default best practice. For instructions on this process, refer to the ~/openstack/ardana/ansible/roles/multipath/README.md file on the Cloud Lifecycle Manager. The README.md file contains detailed procedures for configuring multipath for 3PAR FC/iSCSI cinder volumes.
The following steps are also required to enable 3PAR FC/iSCSI multipath support in the OpenStack configuration files:
Log in to the Cloud Lifecycle Manager.
Edit the ~/openstack/my_cloud/config/nova/kvm-hypervisor.conf.j2 file and add this line under the [libvirt] section:

Example:

[libvirt]
...
iscsi_use_multipath=true
If you plan to attach encrypted 3PAR volumes, also set volume_use_multipath=true in the same section.

Edit the file ~/openstack/my_cloud/config/cinder/cinder.conf.j2 and add the following lines in the [3par] section:

Example:

[3par]
...
enforce_multipath_for_image_xfer=True
use_multipath_for_image_xfer=True
Commit your configuration to the local git repo (Chapter 22, Using Git for Configuration Management), as follows:
cd ~/openstack/ardana/ansible
git add -A
git commit -m "My config or other commit message"
Run the configuration processor:
cd ~/openstack/ardana/ansible
ansible-playbook -i hosts/localhost config-processor-run.yml
Use the playbook below to create a deployment directory:
cd ~/openstack/ardana/ansible
ansible-playbook -i hosts/localhost ready-deployment.yml
Run the nova reconfigure playbook:
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts nova-reconfigure.yml
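Once the reconfigure completes and a 3PAR volume has been attached to an instance, you can optionally confirm on the compute node that the device is claimed by multipath. This is a sketch, assuming multipath-tools is installed per the README procedure referenced above:

# On the compute node hosting the instance
sudo multipath -ll
sudo multipathd show paths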
35.1.4 Configure 3PAR FC as a Cinder Backend #
You must modify the cinder.conf.j2 file to configure the FC details.
Perform the following steps to configure 3PAR FC as cinder backend:
Log in to the Cloud Lifecycle Manager.
Make the following changes to the ~/openstack/my_cloud/config/cinder/cinder.conf.j2 file:

Add your 3PAR backend to the enabled_backends section:

# Configure the enabled backends
enabled_backends=3par_FC
If you are using multiple backend types, you can use a comma-delimited list.
Important: A default_volume_type is required. Use one or the other of the following alternatives as the volume type to specify as the default_volume_type.

Use a volume type (YOUR VOLUME TYPE) that has already been created to meet the needs of your environment (see Section 8.1.2, “Creating a Volume Type for your Volumes”).

You can create an empty volume type called default_type with the following:

ardana > openstack volume type create --is-public True \
  --description "Default volume type" default_type

In cinder.conf.j2, set default_volume_type with one or the other of the following:

[DEFAULT]
# Set the default volume type
default_volume_type = default_type

[DEFAULT]
# Set the default volume type
default_volume_type = YOUR VOLUME TYPE
Uncomment the StoreServ (3par) cluster section and fill in the values per your cluster information. Storage performance can be improved by enabling the Image-Volume cache. Here is an example:

[3par_FC]
san_ip: <3par-san-ipaddr>
san_login: <3par-san-username>
san_password: <3par-san-password>
hpe3par_username: <3par-username>
hpe3par_password: <hpe3par_password>
hpe3par_api_url: https://<3par-san-ipaddr>:8080/api/v1
hpe3par_cpg: <3par-cpg-name-1>[,<3par-cpg-name-2>, ...]
volume_backend_name: <3par-backend-name>
volume_driver = cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver
image_volume_cache_enabled = True
Important: Do not use the backend_host variable in the cinder.conf.j2 file. If backend_host is set, it will override the [DEFAULT]/host value, which SUSE OpenStack Cloud 9 depends on.

Commit your configuration to the local git repo (Chapter 22, Using Git for Configuration Management), as follows:
cd ~/openstack/ardana/ansible
git add -A
git commit -m "My config or other commit message"
Run the configuration processor:
cd ~/openstack/ardana/ansible
ansible-playbook -i hosts/localhost config-processor-run.yml
Update your deployment directory:
cd ~/openstack/ardana/ansible
ansible-playbook -i hosts/localhost ready-deployment.yml
Run the following playbook to complete the configuration:
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts cinder-reconfigure.yml
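As an optional check, you can create a volume type bound to the new FC backend and a small test volume. This is a sketch; the type and volume names are illustrative, and <3par-backend-name> must match the volume_backend_name you set in cinder.conf.j2:

ardana > source ~/service.osrc
ardana > openstack volume type create 3par_fc
ardana > openstack volume type set --property volume_backend_name=<3par-backend-name> 3par_fc
ardana > openstack volume create --type 3par_fc --size 1 3par-fc-test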
35.1.5 Configure 3PAR iSCSI as Cinder backend #
You must modify the cinder.conf.j2 file to configure the iSCSI details.
Perform the following steps to configure 3PAR iSCSI as cinder backend:
Log in to the Cloud Lifecycle Manager.
Make the following changes to the ~/openstack/my_cloud/config/cinder/cinder.conf.j2 file:

Add your 3PAR backend to the enabled_backends section:

# Configure the enabled backends
enabled_backends=3par_iSCSI
Uncomment the StoreServ (3par) iscsi cluster section and fill in the values per your cluster information. Here is an example:

[3par_iSCSI]
san_ip: <3par-san-ipaddr>
san_login: <3par-san-username>
san_password: <3par-san-password>
hpe3par_username: <3par-username>
hpe3par_password: <hpe3par_password>
hpe3par_api_url: https://<3par-san-ipaddr>:8080/api/v1
hpe3par_cpg: <3par-cpg-name-1>[,<3par-cpg-name-2>, ...]
volume_backend_name: <3par-backend-name>
volume_driver: cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
hpe3par_iscsi_ips: <3par-ip-address-1>[,<3par-ip-address-2>,<3par-ip-address-3>, ...]
hpe3par_iscsi_chap_enabled=true
Important: Do not use the backend_host variable in the cinder.conf file. If backend_host is set, it will override the [DEFAULT]/host value, which SUSE OpenStack Cloud 9 depends on.
Commit your configuration to your local git repository:
cd ~/openstack/ardana/ansible
git add -A
git commit -m "<commit message>"
Run the configuration processor:
cd ~/openstack/ardana/ansible
ansible-playbook -i hosts/localhost config-processor-run.yml
When you run the configuration processor you will be prompted for two passwords. Enter the first password to make the configuration processor encrypt its sensitive data, which consists of the random inter-service passwords that it generates and the Ansible group_vars and host_vars that it produces for subsequent deploy runs. You will need this key for subsequent Ansible deploy runs and subsequent configuration processor runs. If you wish to change an encryption password that you have already used when running the configuration processor then enter the new password at the second prompt, otherwise press Enter.
For CI purposes you can specify the required passwords on the ansible command line. For example, the command below will disable encryption by the configuration processor:
ansible-playbook -i hosts/localhost config-processor-run.yml \ -e encrypt="" -e rekey=""
If you receive an error during either of these steps then there is an issue with one or more of your configuration files. We recommend that you verify that all of the information in each of your configuration files is correct for your environment and then commit those changes to git using the instructions above.
Run the following command to create a deployment directory.
cd ~/openstack/ardana/ansible
ansible-playbook -i hosts/localhost ready-deployment.yml
Run the following command to complete the configuration:
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts cinder-reconfigure.yml
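As an optional check, verify that the new backend's cinder-volume service is up and create a test volume against it. This is a sketch; the type and volume names are illustrative, and <3par-backend-name> must match the volume_backend_name from cinder.conf.j2:

ardana > source ~/service.osrc
ardana > openstack volume service list
ardana > openstack volume type create 3par_iscsi
ardana > openstack volume type set --property volume_backend_name=<3par-backend-name> 3par_iscsi
ardana > openstack volume create --type 3par_iscsi --size 1 3par-iscsi-test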
35.1.6 Post-Installation Tasks #
After configuring 3PAR as your Block Storage backend, perform the following tasks:
35.2 Ironic HPE OneView Integration #
SUSE OpenStack Cloud 9 supports integration of the ironic (baremetal) service with HPE OneView using the agent_pxe_oneview driver. Refer to the OpenStack documentation for more information.
35.2.1 Prerequisites #
SUSE OpenStack Cloud 9 is installed with the entry-scale-ironic-flat-network or entry-scale-ironic-multi-tenancy model.

An HPE OneView 3.0 instance is running and connected to the management network.
HPE OneView configuration is set in definition/data/ironic/ironic_config.yml (and the ironic-reconfigure.yml playbook has been run if needed). This should enable the agent_pxe_oneview driver in the ironic conductor.

Managed node(s) should support PXE booting in legacy BIOS mode.

Managed node(s) should have the PXE boot NIC listed first. That is, the embedded 1Gb NIC must be disabled (otherwise it always goes first).
35.2.2 Integrating with HPE OneView #
On the Cloud Lifecycle Manager, open the file ~/openstack/my_cloud/definition/data/ironic/ironic_config.yml:

~$ cd ~/openstack
~/openstack$ vi my_cloud/definition/data/ironic/ironic_config.yml
Modify the settings listed below:
enable_oneview: should be set to "true" for HPE OneView integration

oneview_manager_url: HTTPS endpoint of the HPE OneView management interface, for example: https://10.0.0.10/

oneview_username: HPE OneView username, for example: Administrator

oneview_encrypted_password: HPE OneView password in encrypted or clear text form. The encrypted form is distinguished by the presence of @ardana@ at the beginning of the string. The encrypted form can be created by running the ardanaencrypt.py program. This program is shipped as part of SUSE OpenStack Cloud and can be found in the ~/openstack/ardana/ansible directory on the Cloud Lifecycle Manager.

oneview_allow_insecure_connections: should be set to "true" if HPE OneView is using a self-generated certificate.
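Putting the settings together, a filled-in ironic_config.yml might look like the following sketch. The values are illustrative only; use your own OneView address, credentials, and encrypted password:

# ironic_config.yml (illustrative values)
enable_oneview: true
oneview_manager_url: "https://10.0.0.10/"
oneview_username: "Administrator"
oneview_encrypted_password: "@ardana@..."   # output of ardanaencrypt.py, or clear text
oneview_allow_insecure_connections: true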
Once you have saved your changes and exited the editor, add files, commit changes to the local git repository, and run the config-processor-run.yml and ready-deployment.yml playbooks, as described in Chapter 22, Using Git for Configuration Management.

~/openstack$ git add my_cloud/definition/data/ironic/ironic_config.yml
~/openstack$ cd ardana/ansible
~/openstack/ardana/ansible$ ansible-playbook -i hosts/localhost \
  config-processor-run.yml
...
~/openstack/ardana/ansible$ ansible-playbook -i hosts/localhost \
  ready-deployment.yml
Run the ironic-reconfigure.yml playbook.
$ cd ~/scratch/ansible/next/ardana/ansible/
# This is needed if password was encrypted in ironic_config.yml file
~/scratch/ansible/next/ardana/ansible$ export ARDANA_USER_PASSWORD_ENCRYPT_KEY=your_password_encrypt_key
~/scratch/ansible/next/ardana/ansible$ ansible-playbook -i hosts/verb_hosts ironic-reconfigure.yml
...
35.2.3 Registering Node in HPE OneView #
In the HPE OneView web interface:
Navigate to › . Add new item, using managed node IPMI IP and credentials. If this is the first node of this type being added, corresponding will be created automatically.

Navigate to › . Add . Use corresponding to node being registered. In section, set and options must be turned on.

Verify that node is powered off. Power the node off if needed.
35.2.4 Provisioning ironic Node #
Log in to the Cloud Lifecycle Manager and source the respective credentials file (for example, service.osrc for the admin account).

Review glance images with openstack image list:

$ openstack image list
+--------------------------------------+--------------------------+
| ID                                   | Name                     |
+--------------------------------------+--------------------------+
| c61da588-622c-4285-878f-7b86d87772da | cirros-0.3.4-x86_64      |
+--------------------------------------+--------------------------+
ironic deploy images (boot image, ir-deploy-kernel, ir-deploy-ramdisk, ir-deploy-iso) are created automatically. The agent_pxe_oneview ironic driver requires the ir-deploy-kernel and ir-deploy-ramdisk images.

Create the node using the agent_pxe_oneview driver:
driver.$ ironic --ironic-api-version 1.22 node-create -d agent_pxe_oneview --name test-node-1 \ --network-interface neutron -p memory_mb=131072 -p cpu_arch=x86_64 -p local_gb=80 -p cpus=2 \ -p 'capabilities=boot_mode:bios,boot_option:local,server_hardware_type_uri:\ /rest/server-hardware-types/E5366BF8-7CBF-48DF-A752-8670CF780BB2,server_profile_template_uri:\ /rest/server-profile-templates/00614918-77f8-4146-a8b8-9fc276cd6ab2' \ -i 'server_hardware_uri=/rest/server-hardware/32353537-3835-584D-5135-313930373046' \ -i dynamic_allocation=True \ -i deploy_kernel=633d379d-e076-47e6-b56d-582b5b977683 \ -i deploy_ramdisk=d5828785-edf2-49fa-8de2-3ddb7f3270d5 +-------------------+--------------------------------------------------------------------------+ | Property | Value | +-------------------+--------------------------------------------------------------------------+ | chassis_uuid | | | driver | agent_pxe_oneview | | driver_info | {u'server_hardware_uri': u'/rest/server- | | | hardware/32353537-3835-584D-5135-313930373046', u'dynamic_allocation': | | | u'True', u'deploy_ramdisk': u'd5828785-edf2-49fa-8de2-3ddb7f3270d5', | | | u'deploy_kernel': u'633d379d-e076-47e6-b56d-582b5b977683'} | | extra | {} | | name | test-node-1 | | network_interface | neutron | | properties | {u'memory_mb': 131072, u'cpu_arch': u'x86_64', u'local_gb': 80, u'cpus': | | | 2, u'capabilities': | | | u'boot_mode:bios,boot_option:local,server_hardware_type_uri:/rest | | | /server-hardware-types/E5366BF8-7CBF- | | | 48DF-A752-8670CF780BB2,server_profile_template_uri:/rest/server-profile- | | | templates/00614918-77f8-4146-a8b8-9fc276cd6ab2'} | | resource_class | None | | uuid | c202309c-97e2-4c90-8ae3-d4c95afdaf06 | +-------------------+--------------------------------------------------------------------------+
Note: For deployments created via ironic/HPE OneView integration, the memory_mb property must reflect the physical amount of RAM installed in the managed node. That is, for a server with 128 GB of RAM it works out to 128*1024=131072.

The boot mode in the capabilities property must reflect the boot mode used by the server, that is, 'bios' for Legacy BIOS and 'uefi' for UEFI.
Values for server_hardware_type_uri, server_profile_template_uri and server_hardware_uri can be taken from the browser URL field while navigating to the respective objects in the HPE OneView UI. The URI corresponds to the part of the URL that starts from the token /rest. That is, the URL https://oneview.mycorp.net/#/profile-templates/show/overview/r/rest/server-profile-templates/12345678-90ab-cdef-0123-012345678901 corresponds to the URI /rest/server-profile-templates/12345678-90ab-cdef-0123-012345678901.

Grab the IDs of deploy_kernel and deploy_ramdisk from the openstack image list output above.
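A quick way to pick out those image IDs is to filter the image list; the grep pattern assumes the default ir-deploy image names mentioned above:

$ openstack image list | grep ir-deploy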
Create port.
$ ironic --ironic-api-version 1.22 port-create \
  --address aa:bb:cc:dd:ee:ff \
  --node c202309c-97e2-4c90-8ae3-d4c95afdaf06 \
  -l switch_id=ff:ee:dd:cc:bb:aa \
  -l switch_info=MY_SWITCH \
  -l port_id="Ten-GigabitEthernet 1/0/1" \
  --pxe-enabled true
+-----------------------+----------------------------------------------------------------+
| Property              | Value                                                          |
+-----------------------+----------------------------------------------------------------+
| address               | 8c:dc:d4:b5:7d:1c                                              |
| extra                 | {}                                                             |
| local_link_connection | {u'switch_info': u'C20DATA', u'port_id': u'Ten-GigabitEthernet |
|                       | 1/0/1', u'switch_id': u'ff:ee:dd:cc:bb:aa'}                    |
| node_uuid             | c202309c-97e2-4c90-8ae3-d4c95afdaf06                           |
| pxe_enabled           | True                                                           |
| uuid                  | 75b150ef-8220-4e97-ac62-d15548dc8ebe                           |
+-----------------------+----------------------------------------------------------------+
Warning: The ironic Multi-Tenancy networking model is used in this example. Therefore, the ironic port-create command contains information about the physical switch. HPE OneView integration can also be performed using the ironic Flat Networking model. For more information, see Section 9.6, “Ironic Examples”.
Move the node to the manageable provisioning state. The connectivity between ironic and HPE OneView will be verified, the Server Hardware Template settings validated, and the Server Hardware power status retrieved from HPE OneView and set on the ironic node.
$ ironic node-set-provision-state test-node-1 manage
Verify that node power status is populated.
$ ironic node-show test-node-1 +-----------------------+-----------------------------------------------------------------------+ | Property | Value | +-----------------------+-----------------------------------------------------------------------+ | chassis_uuid | | | clean_step | {} | | console_enabled | False | | created_at | 2017-06-30T21:00:26+00:00 | | driver | agent_pxe_oneview | | driver_info | {u'server_hardware_uri': u'/rest/server- | | | hardware/32353537-3835-584D-5135-313930373046', u'dynamic_allocation':| | | u'True', u'deploy_ramdisk': u'd5828785-edf2-49fa-8de2-3ddb7f3270d5', | | | u'deploy_kernel': u'633d379d-e076-47e6-b56d-582b5b977683'} | | driver_internal_info | {} | | extra | {} | | inspection_finished_at| None | | inspection_started_at | None | | instance_info | {} | | instance_uuid | None | | last_error | None | | maintenance | False | | maintenance_reason | None | | name | test-node-1 | | network_interface | | | power_state | power off | | properties | {u'memory_mb': 131072, u'cpu_arch': u'x86_64', u'local_gb': 80, | | | u'cpus': 2, u'capabilities': | | | u'boot_mode:bios,boot_option:local,server_hardware_type_uri:/rest | | | /server-hardware-types/E5366BF8-7CBF- | | | 48DF-A752-86...BB2,server_profile_template_uri:/rest/server-profile- | | | templates/00614918-77f8-4146-a8b8-9fc276cd6ab2'} | | provision_state | manageable | | provision_updated_at | 2017-06-30T21:04:43+00:00 | | raid_config | | | reservation | None | | resource_class | | | target_power_state | None | | target_provision_state| None | | target_raid_config | | | updated_at | 2017-06-30T21:04:43+00:00 | | uuid | c202309c-97e2-4c90-8ae3-d4c95afdaf06 | +-----------------------+-----------------------------------------------------------------------+
Move the node to the available provisioning state. The ironic node will be reported to nova as available.
$ ironic node-set-provision-state test-node-1 provide
Verify that node resources were added to nova hypervisor stats.
$ openstack hypervisor stats show
+----------------------+--------+
| Property             | Value  |
+----------------------+--------+
| count                | 1      |
| current_workload     | 0      |
| disk_available_least | 80     |
| free_disk_gb         | 80     |
| free_ram_mb          | 131072 |
| local_gb             | 80     |
| local_gb_used        | 0      |
| memory_mb            | 131072 |
| memory_mb_used       | 0      |
| running_vms          | 0      |
| vcpus                | 2      |
| vcpus_used           | 0      |
+----------------------+--------+
Create nova flavor.
$ openstack flavor create --id auto --ram 131072 --disk 80 --vcpus 2 m1.ironic
+-------------+-----------+--------+------+-----------+------+-------+-------------+-----------+
| ID          | Name      | Mem_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+-------------+-----------+--------+------+-----------+------+-------+-------------+-----------+
| 33c8...f8d8 | m1.ironic | 131072 | 80   | 0         |      | 2     | 1.0         | True      |
+-------------+-----------+--------+------+-----------+------+-------+-------------+-----------+
$ openstack flavor set m1.ironic --property capabilities:boot_mode="bios"
$ openstack flavor set m1.ironic --property capabilities:boot_option="local"
$ openstack flavor set m1.ironic --property cpu_arch=x86_64
Note: All parameters (specifically, the amount of RAM and the boot mode) must correspond to the ironic node parameters.
Create nova keypair if needed.
$ openstack keypair create ironic_kp --pub-key ~/.ssh/id_rsa.pub
Boot nova instance.
$ openstack server create --flavor m1.ironic --image d6b5...e942 --key-name ironic_kp \ --nic net-id=5f36...dcf3 test-node-1 +-------------------------------+-----------------------------------------------------+ | Property | Value | +-------------------------------+-----------------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | | | OS-EXT-SRV-ATTR:host | - | | OS-EXT-SRV-ATTR: | | | hypervisor_hostname | - | | OS-EXT-SRV-ATTR:instance_name | | | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state | scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | - | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | adminPass | pE3m7wRACvYy | | config_drive | | | created | 2017-06-30T21:08:42Z | | flavor | m1.ironic (33c81884-b8aa-46...3b72f8d8) | | hostId | | | id | b47c9f2a-e88e-411a-abcd-6172aea45397 | | image | Ubuntu Trusty 14.04 BIOS (d6b5d971-42...5f2d88e942) | | key_name | ironic_kp | | metadata | {} | | name | test-node-1 | | os-extended-volumes: | | | volumes_attached | [] | | progress | 0 | | security_groups | default | | status | BUILD | | tenant_id | c8573f7026d24093b40c769ca238fddc | | updated | 2017-06-30T21:08:42Z | | user_id | 2eae99221545466d8f175eeb566cc1b4 | +-------------------------------+-----------------------------------------------------+
During nova instance boot, the following operations will be performed by ironic via the HPE OneView REST API.

In HPE OneView, a new Server Profile is generated for the specified Server Hardware, using the specified Server Profile Template. The boot order in the Server Profile is set to list PXE as the first boot source.

The managed node is powered on and boots the IPA image from PXE.

The IPA image writes the user image onto disk and reports success back to ironic.

ironic modifies the Server Profile in HPE OneView to list 'Disk' as the default boot option.

ironic reboots the node (via an HPE OneView REST API call).
35.3 SUSE Enterprise Storage Integration #
The current version of SUSE OpenStack Cloud supports integration with SUSE Enterprise Storage (SES). Integrating SUSE Enterprise Storage enables Ceph to provide block storage via RADOS Block Device (RBD), image storage, object storage via the RADOS Gateway (RGW), and file storage via CephFS in SUSE OpenStack Cloud. The following documentation outlines integration for SUSE Enterprise Storage 5, 5.5, 6.0, and 7.0.

Support for SUSE Enterprise Storage 5 and 5.5 is deprecated. The documentation for integrating these versions is included for customers who may not yet have upgraded to newer versions of SUSE Enterprise Storage. These versions are no longer officially supported.
Integration with SUSE Enterprise Storage 5.5 is configured using the same steps as SUSE Enterprise Storage 6.0, except that salt-api queries authenticating with the password= parameter should be updated to use sharedsecret= instead.

SUSE Enterprise Storage 6.0 uses a Salt runner that creates users and pools. Salt generates a yaml configuration that is needed to integrate with SUSE OpenStack Cloud. The integration runner creates separate users for cinder, cinder backup, and glance. Both the cinder and nova services use the same user, as cinder needs access to create objects that nova uses.
SUSE Enterprise Storage 5 uses a manual configuration that requires the creation of users and pools.
For more information on SUSE Enterprise Storage, see https://documentation.suse.com/ses/6.
35.3.1 Enabling SUSE Enterprise Storage 7.0 Integration #
The following instructions detail integrating SUSE Enterprise Storage 7.0 with SUSE OpenStack Cloud.
Create the OSD pools on the SUSE Enterprise Storage admin node (the names provided here are examples):

ceph osd pool create ses-cloud-volumes 16 && \
ceph osd pool create ses-cloud-backups 16 && \
ceph osd pool create ses-cloud-images 16 && \
ceph osd pool create ses-cloud-vms 16
Enable the OSD pools:

ceph osd pool application enable ses-cloud-volumes rbd && \
ceph osd pool application enable ses-cloud-backups rbd && \
ceph osd pool application enable ses-cloud-images rbd && \
ceph osd pool application enable ses-cloud-vms rbd
Configure permissions on the SUSE OpenStack Cloud deployer:

ceph-authtool -C /etc/ceph/ceph.client.ses-cinder.keyring --name client.ses-cinder --add-key $(ceph-authtool --gen-print-key) --cap mon "allow r" --cap osd "allow class-read object_prefix rbd_children, allow rwx pool=ses-cloud-volumes, allow rwx pool=ses-cloud-vms, allow rwx pool=ses-cloud-images"
ceph-authtool -C /etc/ceph/ceph.client.ses-cinder-backup.keyring --name client.ses-cinder-backup --add-key $(ceph-authtool --gen-print-key) --cap mon "allow r" --cap osd "allow class-read object_prefix rbd_children, allow rwx pool=ses-cloud-backups"
ceph-authtool -C /etc/ceph/ceph.client.ses-glance.keyring --name client.ses-glance --add-key $(ceph-authtool --gen-print-key) --cap mon "allow r" --cap osd "allow class-read object_prefix rbd_children, allow rwx pool=ses-cloud-images"
Import the updated keyrings into Ceph:

ceph auth import -i /etc/ceph/ceph.client.ses-cinder-backup.keyring && \
ceph auth import -i /etc/ceph/ceph.client.ses-cinder.keyring && \
ceph auth import -i /etc/ceph/ceph.client.ses-glance.keyring
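You can optionally confirm that the users and their capabilities were imported as expected:

ceph auth get client.ses-cinder
ceph auth get client.ses-cinder-backup
ceph auth get client.ses-glance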
35.3.2 Enabling SUSE Enterprise Storage 6.0 Integration #
The following instructions detail integrating SUSE Enterprise Storage 6.0 & 5.5 with SUSE OpenStack Cloud.
Log in as root to run the SES 6 Salt runner on the salt admin host:
If no prefix is specified (as in the command below), pool names are generic and prefixed with cloud- by default.

root # salt-run --out=yaml openstack.integrate
ceph_conf:
  cluster_network: 10.84.56.0/21
  fsid: d5d7c7cb-5858-3218-a36f-d028df7b0673
  mon_host: 10.84.56.8, 10.84.56.9, 10.84.56.7
  mon_initial_members: ses-osd1, ses-osd2, ses-osd3
  public_network: 10.84.56.0/21
cinder:
  key: AQBI5/xcAAAAABAAFP7ES4gl5tZ9qdLd611AmQ==
  rbd_store_pool: cloud-volumes
  rbd_store_user: cinder
cinder-backup:
  key: AQBI5/xcAAAAABAAVSZmfeuPl3KFvJetCygUmA==
  rbd_store_pool: cloud-backups
  rbd_store_user: cinder-backup
glance:
  key: AQBI5/xcAAAAABAALHgkBxARTZAeuoIWDsC0LA==
  rbd_store_pool: cloud-images
  rbd_store_user: glance
nova:
  rbd_store_pool: cloud-vms
radosgw_urls:
  - http://10.84.56.7:80/swift/v1
  - http://10.84.56.8:80/swift/v1
If you perform the command with a prefix, the prefix is applied to pool names and to key names. This way, multiple cloud deployments can use different users and pools on the same SES deployment.
root # salt-run --out=yaml openstack.integrate prefix=mycloud
ceph_conf:
  cluster_network: 10.84.56.0/21
  fsid: d5d7c7cb-5858-3218-a36f-d028df7b0673
  mon_host: 10.84.56.8, 10.84.56.9, 10.84.56.7
  mon_initial_members: ses-osd1, ses-osd2, ses-osd3
  public_network: 10.84.56.0/21
cinder:
  key: AQAM5fxcAAAAABAAIyMeLwclr+5uegp33xdiIw==
  rbd_store_pool: mycloud-cloud-volumes
  rbd_store_user: mycloud-cinder
cinder-backup:
  key: AQAM5fxcAAAAABAAq6ZqKuMNaaJgk6OtFHMnsQ==
  rbd_store_pool: mycloud-cloud-backups
  rbd_store_user: mycloud-cinder-backup
glance:
  key: AQAM5fxcAAAAABAAvhJjxC81IePAtnkye+bLoQ==
  rbd_store_pool: mycloud-cloud-images
  rbd_store_user: mycloud-glance
nova:
  rbd_store_pool: mycloud-cloud-vms
radosgw_urls:
  - http://10.84.56.7:80/swift/v1
  - http://10.84.56.8:80/swift/v1
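One way to get this output onto the Cloud Lifecycle Manager is to redirect it to a file and copy it into the directory referenced later by ses_config_path. The host name and target path below are placeholders to adapt to your environment:

root # salt-run --out=yaml openstack.integrate prefix=mycloud > /tmp/ses_config.yml
root # scp /tmp/ses_config.yml ardana@DEPLOYER_HOST:/var/lib/ardana/ses/ses_config.yml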
35.3.3 Enabling SUSE Enterprise Storage 7, 6, 5 Integration #
The following instructions detail integrating SUSE Enterprise Storage 7, 6, 5 with SUSE OpenStack Cloud.
The SUSE Enterprise Storage integration is provided through the ardana-ses
RPM package. This package is included in the
patterns-cloud-ardana
pattern and the installation is
covered in Chapter 15, Installing the Cloud Lifecycle Manager server. The update repositories and
the installation covered there are required to support SUSE Enterprise Storage
integration. The latest updates should be applied before proceeding.
After the SUSE Enterprise Storage integration package has been installed, it must be
configured. Files that contain relevant SUSE Enterprise Storage deployment information
must be placed into a directory on the deployer node. This includes the
configuration file that describes various aspects of the Ceph environment
as well as keyrings for each user and pool created in the Ceph
environment. In addition to that, you need to edit the
settings.yml
file to enable the SUSE Enterprise Storage integration to
run and update all of the SUSE OpenStack Cloud service configuration files.
The settings.yml
file must reside in the
~/openstack/my_cloud/config/ses/
directory. Open the
file for editing, uncomment the ses_config_path:
parameter, and specify the location on the deployer host containing the
ses_config.yml
and keyring files as the parameter's
value. After you have done that, the site.yml
and
ardana-reconfigure.yml
playbooks activate and configure
the cinder, glance, and nova
services.
For security reasons, you should use a unique UUID in the
settings.yml
file for
ses_secret_id
, replacing the fixed, hard-coded UUID in
that file. You can generate a UUID that is unique to your deployment
using the command uuidgen
.
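For example (the UUID shown is only a sample; generate your own):

ardana > uuidgen
457eb676-33da-42ec-9a8c-9293d545c337

Then set ses_secret_id in ~/openstack/my_cloud/config/ses/settings.yml to the generated value.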
After you have run the openstack.integrate
runner, copy
the yaml into the ses_config.yml
file on the deployer
node. Then edit the settings.yml
file to enable SUSE Enterprise Storage
integration to run and update all of the SUSE OpenStack Cloud service configuration
files. The settings.yml
file resides in the
~/openstack/my_cloud/config/ses
directory. Open the
settings.yml
file for editing, uncomment the
ses_config_path:
parameter, and specify the location on
the deployer host containing the ses_config.yml
file.
If you are integrating with SUSE Enterprise Storage and want to store nova images in Ceph, then set the following:

ses_nova_set_images_type: True

If you do not want to store nova images in Ceph, the following setting is required:

ses_nova_set_images_type: False
Commit your configuration to your local git repo:
ardana > cd ~/openstack/ardana/ansible
ardana > git add -A
ardana > git commit -m "add SES integration"

Run the configuration processor:

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml

Create a deployment directory:

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost ready-deployment.yml

Run a series of reconfiguration playbooks:

ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts ses-deploy.yml
ardana > ansible-playbook -i hosts/verb_hosts cinder-reconfigure.yml
ardana > ansible-playbook -i hosts/verb_hosts glance-reconfigure.yml
ardana > ansible-playbook -i hosts/verb_hosts nova-reconfigure.yml

Reconfigure the Cloud Lifecycle Manager to complete the deployment:

ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts ardana-reconfigure.yml
In the control_plane.yml file, the glance default_store option must be adjusted:

- glance-api:
    glance_default_store: 'rbd'
The following content is only relevant if you are running a standalone Ceph cluster (not SUSE Enterprise Storage) or a SUSE Enterprise Storage cluster that is before version 5.5.
For Ceph, it is necessary to create pools and users to allow the SUSE OpenStack Cloud services to use the SUSE Enterprise Storage/Ceph cluster. Pools and users must be created for cinder, cinder backup, nova and glance. Instructions for creating and managing pools, users and keyrings is covered in the SUSE Enterprise Storage documentation under https://documentation.suse.com/en-us/ses/5.5/single-html/ses-admin/#storage-cephx-keymgmt.
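As a rough sketch, creating the pools and cephx users referenced in the example ses_config.yml below could look like this on the Ceph admin node. The pool names, placement group counts, and capabilities shown are illustrative and must be adapted to your cluster; the SUSE Enterprise Storage documentation linked above is authoritative:

# Illustrative only; adjust pool names, PG counts, and caps for your cluster
ceph osd pool create cinder 64
ceph osd pool create backups 64
ceph osd pool create glance 64
ceph osd pool create nova 64
ceph auth get-or-create client.cinder mon 'allow r' \
  osd 'allow class-read object_prefix rbd_children, allow rwx pool=cinder, allow rwx pool=nova, allow rwx pool=glance' \
  -o ceph.client.cinder.keyring
ceph auth get-or-create client.cinder_backup mon 'allow r' \
  osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups' \
  -o ceph.client.cinder-backup.keyring
ceph auth get-or-create client.glance mon 'allow r' \
  osd 'allow class-read object_prefix rbd_children, allow rwx pool=glance' \
  -o ceph.client.glance.keyring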
After the required pools and users are set up on the Ceph
cluster, you have to create a ses_config.yml
configuration file (see the example below). This file is used during
deployment to configure all of the services. The
ses_config.yml
and the keyring files should be placed
in a separate directory.
If you are integrating with SUSE Enterprise Storage and do not want to store nova images in Ceph, the following setting is required: edit settings.yml and change the line ses_nova_set_images_type: True to ses_nova_set_images_type: False.
ses_cluster_configuration:
  ses_cluster_name: ceph
  ses_radosgw_url: "https://192.168.56.8:8080/swift/v1"

  conf_options:
    ses_fsid: d5d7c7cb-5858-3218-a36f-d028df7b1111
    ses_mon_initial_members: ses-osd2, ses-osd3, ses-osd1
    ses_mon_host: 192.168.56.8, 192.168.56.9, 192.168.56.7
    ses_public_network: 192.168.56.0/21
    ses_cluster_network: 192.168.56.0/21

  cinder:
    rbd_store_pool: cinder
    rbd_store_pool_user: cinder
    keyring_file_name: ceph.client.cinder.keyring

  cinder-backup:
    rbd_store_pool: backups
    rbd_store_pool_user: cinder_backup
    keyring_file_name: ceph.client.cinder-backup.keyring

  # nova uses the cinder user to access the nova pool, cinder pool
  # So all we need here is the nova pool name.
  nova:
    rbd_store_pool: nova

  glance:
    rbd_store_pool: glance
    rbd_store_pool_user: glance
    keyring_file_name: ceph.client.glance.keyring
The path to this directory must be specified in the
settings.yml
file, as in the example below. After
making the changes, follow the steps to complete the configuration.
settings.yml
...
ses_config_path: /var/lib/ardana/ses/
ses_config_file: ses_config.yml

# The unique uuid for use with virsh for cinder and nova
ses_secret_id: SES_SECRET_ID
After modifying these files, commit your configuration to the local git repo. For more information, see Chapter 22, Using Git for Configuration Management.
ardana > cd ~/openstack/ardana/ansible
ardana > git add -A
ardana > git commit -m "configure SES 5"

Run the configuration processor:

ardana > ansible-playbook -i hosts/localhost config-processor-run.yml

Create a deployment directory:

ardana > ansible-playbook -i hosts/localhost ready-deployment.yml

Reconfigure Ardana:

ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts ardana-reconfigure.yml
When configuring SUSE Enterprise Storage integration, be aware of CVE-2021-20288 relating to Unauthorized global_id re-use.
35.3.4 Add Missing Swift Endpoints #
If you deployed Cloud Lifecycle Manager using the SUSE Enterprise Storage integration without swift, the integration will not be set up properly. Swift object endpoints will be missing. Use the following process to create the necessary endpoints.
Source the keystone rc file to have the correct permissions to create the swift service and endpoints.

ardana > . ~/keystone.osrc

Create the swift service.

ardana > openstack service create --name swift object-store --enable

Read the RADOS gateway URL from the ses_config.yml file. For example:

ardana > grep http ~/ses/ses_config.yml
https://ses-osd3:8080/swift/v1

Create the three swift endpoints.

ardana > openstack endpoint create --enable --region region1 swift \
  admin https://ses-osd3:8080/swift/v1
ardana > openstack endpoint create --enable --region region1 swift \
  public https://ses-osd3:8080/swift/v1
ardana > openstack endpoint create --enable --region region1 swift \
  internal https://ses-osd3:8080/swift/v1

Verify the objects in the endpoint list.

ardana > openstack endpoint list | grep object
5313b...e9412f region1 swift object-store True public   https://ses-osd3:8080/swift/v1
83faf...1eb602 region1 swift object-store True internal https://ses-osd3:8080/swift/v1
dc698...715b8c region1 swift object-store True admin    https://ses-osd3:8080/swift/v1
35.3.5 Configuring SUSE Enterprise Storage for Integration with RADOS Gateway #
RADOS gateway integration can be enabled (disabled) by adding (removing) the following line in the ses_config.yml:
ses_radosgw_url: "https://192.168.56.8:8080/swift/v1"
If RADOS gateway integration is enabled, additional SUSE Enterprise Storage configuration is
needed. RADOS gateway must be configured to use keystone for
authentication. This is done by adding the configuration statements below
to the rados section of ceph.conf
on the RADOS node.
[client.rgw.HOSTNAME]
rgw frontends = "civetweb port=80+443s"
rgw enable usage log = true
rgw keystone url = KEYSTONE_ENDPOINT (for example: https://192.168.24.204:5000)
rgw keystone admin user = KEYSTONE_ADMIN_USER
rgw keystone admin password = KEYSTONE_ADMIN_PASSWORD
rgw keystone admin project = KEYSTONE_ADMIN_PROJECT
rgw keystone admin domain = KEYSTONE_ADMIN_DOMAIN
rgw keystone api version = 3
rgw keystone accepted roles = admin,member
rgw keystone accepted admin roles = admin
rgw keystone revocation interval = 0
rgw keystone verify ssl = false # If keystone is using a self-signed certificate
When integrating with SUSE Enterprise Storage 7, the ceph.conf file contains only the minimal setup; the remainder of the configuration data is stored in the Ceph database.
To update the configuration for SUSE Enterprise Storage 7, use the ceph config CLI. It is possible to import a yaml formatted configuration as follows:

ceph config assimilate-conf -i configuration_file.yml
After making these changes to ceph.conf (and, in the case of SUSE Enterprise Storage 7, importing the configuration), the RADOS gateway service needs to be restarted.
Enabling RADOS gateway replaces the existing Object Storage endpoint with the RADOS gateway endpoint.
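After restarting the gateway, you can optionally confirm that keystone-authenticated object storage requests work and that the object-store endpoint now points at the RADOS gateway. This is a sketch run from the Cloud Lifecycle Manager:

ardana > source ~/service.osrc
ardana > openstack endpoint list | grep object-store
ardana > openstack container list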
35.3.6 Enabling HTTPS, Creating and Importing a Certificate #
SUSE Enterprise Storage integration uses the HTTPS protocol to connect to the RADOS gateway. However, with SUSE Enterprise Storage 5, HTTPS is not enabled by default. To enable the gateway role to communicate securely using SSL, you need to either have a CA-issued certificate or create a self-signed one. Instructions for both are available in the SUSE Enterprise Storage documentation.
The certificate needs to be installed on your Cloud Lifecycle Manager. On the Cloud Lifecycle Manager, copy the cert to /tmp/ardana_tls_cacerts. Then deploy it.
ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts tls-trust-deploy.yml
ardana > ansible-playbook -i hosts/verb_hosts tls-reconfigure.yml
When creating the certificate, the subjectAltName must match the ses_radosgw_url entry in ses_config.yml. Either an IP address or FQDN can be used, but these values must be the same in both places.
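A quick way to inspect the subjectAltName of the certificate you are about to deploy (the path is the one used above):

ardana > openssl x509 -in /tmp/ardana_tls_cacerts -noout -text | grep -A1 "Subject Alternative Name"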
35.3.7 Deploying SUSE Enterprise Storage Configuration for RADOS Integration #
The following steps deploy your configuration.
Commit your configuration to your local git repo.
ardana > cd ~/openstack/ardana/ansible
ardana > git add -A
ardana > git commit -m "add SES integration"

Run the configuration processor.

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml

Create a deployment directory.

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost ready-deployment.yml

Run a series of reconfiguration playbooks.

ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts ses-deploy.yml
ardana > ansible-playbook -i hosts/verb_hosts cinder-reconfigure.yml
ardana > ansible-playbook -i hosts/verb_hosts glance-reconfigure.yml
ardana > ansible-playbook -i hosts/verb_hosts nova-reconfigure.yml

Reconfigure the Cloud Lifecycle Manager to complete the deployment.

ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts ardana-reconfigure.yml
35.3.8 Enable Copy-On-Write Cloning of Images #
Due to a security issue described in http://docs.ceph.com/docs/master/rbd/rbd-openstack/?highlight=uuid#enable-copy-on-write-cloning-of-images, we do not recommend the copy-on-write cloning of images when glance and cinder are both using a Ceph back-end. However, if you want to use this feature for faster operation, you can enable it as follows.
Open the ~/openstack/my_cloud/config/glance/glance-api.conf.j2 file for editing and add show_image_direct_url = True under the [DEFAULT] section.

Commit changes:

git add -A
git commit -m "Enable Copy-on-Write Cloning"
Run the required playbooks:

ardana > ansible-playbook -i hosts/localhost config-processor-run.yml
ardana > ansible-playbook -i hosts/localhost ready-deployment.yml
ardana > cd /var/lib/ardana/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts glance-reconfigure.yml
Note that this exposes the back-end location via glance's API, so the end-point should not be publicly accessible when Copy-On-Write image cloning is enabled.
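To confirm the setting took effect, you can check whether glance now returns a direct_url for an image. This is a sketch; the field only appears when show_image_direct_url is enabled, and the image ID is a placeholder:

ardana > openstack image show <image-id> -f json | grep direct_url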
35.3.9 Improve SUSE Enterprise Storage Performance #
SUSE Enterprise Storage performance can be improved with Image-Volume cache. Be aware that Image-Volume cache and Copy-on-Write cloning cannot be used for the same storage back-end. For more information, see the OpenStack documentation.
Enable Image-Volume cache with the following steps:
Open the ~/openstack/my_cloud/config/cinder/cinder.conf.j2 file for editing.

Add the image_volume_cache_enabled = True option under the [ses_ceph] section.

Commit changes:
ardana > git add -A
ardana > git commit -m "Enable Image-Volume cache"

Run the required playbooks:
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml
ardana > ansible-playbook -i hosts/localhost ready-deployment.yml
ardana > cd /var/lib/ardana/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts cinder-reconfigure.yml