Once you have completed your cloud installation, these are some of the common integrations you may want to perform.
This page describes how to configure your 3PAR backend for the HPE Helion OpenStack Entry-scale with KVM cloud model.
You must have licenses for the following software before you start your 3PAR backend configuration for the HPE Helion OpenStack Entry-scale with KVM cloud model:
Thin Provisioning
Virtual Copy
System Reporter
Dynamic Optimization
Priority Optimization
Your HPE Helion OpenStack Entry-scale KVM Cloud should be up and running. Installation steps can be found in Chapter 12, Installing Mid-scale and Entry-scale KVM.
Your 3PAR Storage Array should be available in, or routed to, the cloud management network, and the 3PAR FC and iSCSI ports must be configured.
The 3PAR management IP and iSCSI port IPs must have connectivity from the controller and compute nodes.
Please refer to the system requirements for 3PAR in the OpenStack configuration guide, which can be found here: 3PAR System Requirements.
The cinder_admin role must be added in order to configure 3PAR iSCSI as a volume type in Horizon.

ardana > source ~/service.osrc
ardana > openstack role add --user admin --project admin cinder_admin
Encrypted 3PAR Volume: Attaching an encrypted 3PAR volume is possible after installation by setting volume_use_multipath = true under the [libvirt] stanza in the nova/kvm-hypervisor.conf.j2 file and then reconfiguring Nova.
Using multiple backends: If you are using multiple backend options, ensure that you specify each of the backends you are using when configuring your cinder.conf.j2 file, using a comma-delimited list. Also create multiple volume types so that you can specify a backend to use when creating volumes; instructions are included below. You can also read the OpenStack documentation about Cinder multiple storage backends.
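For illustration only, if you enable both the FC and iSCSI backends configured later in this section, the comma-delimited list in cinder.conf.j2 would look roughly like the following sketch (the section names 3par_FC and 3par_iSCSI are simply the example names used in this chapter):

# Configure the enabled backends
enabled_backends=3par_FC,3par_iSCSI

A separate volume type is then created for each backend, as shown at the end of this section.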
iSCSI and Fibre Channel: Do not configure Cinder backends so that multipath volumes are exported over both iSCSI and Fibre Channel from a 3PAR backend to the same Nova compute server.
Correct 3PAR driver names: In a previous release, the 3PAR drivers used for HPE Helion OpenStack integration were renamed from HP3PARFCDriver and HP3PARISCSIDriver to HPE3PARFCDriver and HPE3PARISCSIDriver respectively (HP changed to HPE). You may get a warning or an error if the deprecated driver names are used. The correct values are those in ~/openstack/my_cloud/config/cinder/cinder.conf.j2.
Setting up multipath support is highly recommended for 3PAR FC/iSCSI backends and should be considered a default best practice. For instructions on this process, refer to the ~/openstack/ardana/ansible/roles/multipath/README.md file on the Cloud Lifecycle Manager. The README.md file contains detailed procedures for configuring multipath for 3PAR FC/iSCSI Cinder volumes.
If multipath functionality is enabled, ensure that all 3PAR fibre channel ports are active and zoned correctly in the 3PAR storage for proper operation.
In addition, take the following steps to enable 3PAR FC/iSCSI multipath support in OpenStack configuration files:
Log in to the Cloud Lifecycle Manager.
Edit the ~/openstack/my_cloud/config/nova/kvm-hypervisor.conf.j2 file and add this line under the [libvirt] section:

Example:

[libvirt]
...
iscsi_use_multipath=true
Additionally, if you are planning on attaching an encrypted 3PAR volume after installation, set volume_use_multipath=true in the same section.
Edit the ~/openstack/my_cloud/config/cinder/cinder.conf.j2 file and add the following lines in the [3par] section:

Example:

[3par]
...
enforce_multipath_for_image_xfer=True
use_multipath_for_image_xfer=True
Commit your configuration to the local git repo (Chapter 10, Using Git for Configuration Management), as follows:
cd ~/openstack/ardana/ansible
git add -A
git commit -m "My config or other commit message"
Run the configuration processor:
cd ~/openstack/ardana/ansible
ansible-playbook -i hosts/localhost config-processor-run.yml
Use the playbook below to create a deployment directory:
cd ~/openstack/ardana/ansible
ansible-playbook -i hosts/localhost ready-deployment.yml
Run the Nova reconfigure playbook:
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts nova-reconfigure.yml
You must modify the cinder.conf.j2 file to configure the FC details.
Perform the following steps to configure 3PAR FC as Cinder backend:
Log in to the Cloud Lifecycle Manager.
Make the following changes to the ~/openstack/my_cloud/config/cinder/cinder.conf.j2 file:
Add your 3PAR backend to the enabled_backends section:

# Configure the enabled backends
enabled_backends=3par_FC
If you are using multiple backend types, you can use a comma-delimited list here.
[OPTIONAL] If you want your volumes to use a default volume type, enter the name of the volume type in the [DEFAULT] section with the syntax below. Remember this value for when you create your volume type in the next section.
If you do not specify a default type, the type used for new volumes is unpredictable. We recommend that you create a volume type that meets the needs of your environment and specify it here.
[DEFAULT]
# Set the default volume type
default_volume_type = <your new volume type>
Uncomment the StoreServ (3PAR) cluster section and fill in the values per your cluster information. Storage performance can be improved by enabling the Image-Volume cache. Here is an example:
[3par_FC]
san_ip = <3par-san-ipaddr>
san_login = <3par-san-username>
san_password = <3par-san-password>
hpe3par_username = <3par-username>
hpe3par_password = <hpe3par_password>
hpe3par_api_url = https://<3par-san-ipaddr>:8080/api/v1
hpe3par_cpg = <3par-cpg-name-1>[,<3par-cpg-name-2>, ...]
volume_backend_name = <3par-backend-name>
volume_driver = cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver
image_volume_cache_enabled = True
Do not use the backend_host variable in the cinder.conf file. If backend_host is set, it will override the [DEFAULT]/host value, which HPE Helion OpenStack 8 depends on.
Commit your configuration to the local git repo (Chapter 10, Using Git for Configuration Management), as follows:
cd ~/openstack/ardana/ansible
git add -A
git commit -m "My config or other commit message"
Run the configuration processor:
cd ~/openstack/ardana/ansible
ansible-playbook -i hosts/localhost config-processor-run.yml
Update your deployment directory:
cd ~/openstack/ardana/ansible
ansible-playbook -i hosts/localhost ready-deployment.yml
Run the following playbook to complete the configuration:
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts cinder-reconfigure.yml
You must modify the cinder.conf.j2 file to configure the iSCSI details.
Perform the following steps to configure 3PAR iSCSI as Cinder backend:
Log in to the Cloud Lifecycle Manager.
Make the following changes to the ~/openstack/my_cloud/config/cinder/cinder.conf.j2 file:
Add your 3PAR backend to the enabled_backends section:

# Configure the enabled backends
enabled_backends=3par_iSCSI
Uncomment the StoreServ (3PAR) iSCSI cluster section and fill in the values per your cluster information. Here is an example:
[3par_iSCSI]
san_ip = <3par-san-ipaddr>
san_login = <3par-san-username>
san_password = <3par-san-password>
hpe3par_username = <3par-username>
hpe3par_password = <hpe3par_password>
hpe3par_api_url = https://<3par-san-ipaddr>:8080/api/v1
hpe3par_cpg = <3par-cpg-name-1>[,<3par-cpg-name-2>, ...]
volume_backend_name = <3par-backend-name>
volume_driver = cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
hpe3par_iscsi_ips = <3par-ip-address-1>[,<3par-ip-address-2>,<3par-ip-address-3>, ...]
hpe3par_iscsi_chap_enabled = true
Do not use the backend_host variable in the cinder.conf file. If backend_host is set, it will override the [DEFAULT]/host value, which HPE Helion OpenStack 8 depends on.
Commit your configuration to your local git repository:
cd ~/openstack/ardana/ansible
git add -A
git commit -m "<commit message>"
Run the configuration processor:
cd ~/openstack/ardana/ansible
ansible-playbook -i hosts/localhost config-processor-run.yml
When you run the configuration processor you will be prompted for two passwords. Enter the first password to make the configuration processor encrypt its sensitive data, which consists of the random inter-service passwords that it generates and the Ansible group_vars and host_vars that it produces for subsequent deploy runs. You will need this key for subsequent Ansible deploy runs and subsequent configuration processor runs. If you wish to change an encryption password that you have already used when running the configuration processor then enter the new password at the second prompt, otherwise press Enter.
For CI purposes you can specify the required passwords on the Ansible command line. For example, the command below will disable encryption by the configuration processor:
ansible-playbook -i hosts/localhost config-processor-run.yml \
  -e encrypt="" -e rekey=""
If you receive an error during either of these steps, there is an issue with one or more of your configuration files. We recommend that you verify that all of the information in each of your configuration files is correct for your environment, and then commit those changes to git using the instructions above.
Run the following command to create a deployment directory.
cd ~/openstack/ardana/ansible
ansible-playbook -i hosts/localhost ready-deployment.yml
Run the following command to complete the configuration:
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts cinder-reconfigure.yml
After configuring 3PAR as your Block Storage backend, perform the following tasks:
Book “Operations Guide”, Chapter 7 “Managing Block Storage”, Section 7.1 “Managing Block Storage using Cinder”, Section 7.1.2 “Creating a Volume Type for your Volumes”
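As an illustrative sketch only (the book section referenced above is the authoritative procedure, and the type and volume names here are placeholders), creating a volume type bound to the 3PAR backend and using it for a new volume could look like this:

ardana > source ~/service.osrc
ardana > openstack volume type create <your new volume type>
ardana > openstack volume type set --property volume_backend_name=<3par-backend-name> <your new volume type>
ardana > openstack volume create --type <your new volume type> --size 10 <volume-name>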
HPE Helion OpenStack 8 supports integration of the Ironic (bare metal) service with HPE OneView using the agent_pxe_oneview driver. Refer to the OpenStack documentation for more information.
Installed HPE Helion OpenStack 8 with entry-scale-ironic-flat-network or entry-scale-ironic-multi-tenancy model.
HPE OneView 3.0 instance is running and connected to management network.
HPE OneView configuration is set in definition/data/ironic/ironic_config.yml (and the ironic-reconfigure.yml playbook has been run if needed). This enables the agent_pxe_oneview driver in the Ironic conductor.
Managed node(s) should support PXE booting in legacy BIOS mode.
Managed node(s) should have the PXE boot NIC listed first. That is, the embedded 1Gb NIC must be disabled (otherwise it always comes first in the boot order).
On the Cloud Lifecycle Manager, open the file ~/openstack/my_cloud/definition/data/ironic/ironic_config.yml:

~$ cd ~/openstack
~/openstack$ vi my_cloud/definition/data/ironic/ironic_config.yml
Modify the settings listed below (an illustrative sketch of the resulting values follows this list):

enable_oneview: must be set to "true" for HPE OneView integration.

oneview_manager_url: the HTTPS endpoint of the HPE OneView management interface, for example: https://10.0.0.10/

oneview_username: the HPE OneView username, for example: Administrator

oneview_encrypted_password: the HPE OneView password, in encrypted or clear-text form. The encrypted form is distinguished by the presence of @ardana@ at the beginning of the string. The encrypted form can be created by running the ardanaencrypt.py program. This program is shipped as part of HPE Helion OpenStack and can be found in the ~/openstack/ardana/ansible directory on the Cloud Lifecycle Manager.

oneview_allow_insecure_connections: should be set to "true" if HPE OneView is using a self-signed certificate.
Once you have saved your changes and exited the editor, add the files, commit the changes to the local git repository, and run the config-processor-run.yml and ready-deployment.yml playbooks, as described in Chapter 10, Using Git for Configuration Management.
~/openstack$ git add my_cloud/definition/data/ironic/ironic_config.yml
~/openstack$ cd ardana/ansible
~/openstack/ardana/ansible$ ansible-playbook -i hosts/localhost \
  config-processor-run.yml
...
~/openstack/ardana/ansible$ ansible-playbook -i hosts/localhost \
  ready-deployment.yml
Run the ironic-reconfigure.yml playbook.
$ cd ~/scratch/ansible/next/ardana/ansible/
# This is needed if password was encrypted in ironic_config.yml file
~/scratch/ansible/next/ardana/ansible$ export ARDANA_USER_PASSWORD_ENCRYPT_KEY=your_password_encrypt_key
~/scratch/ansible/next/ardana/ansible$ ansible-playbook -i hosts/verb_hosts ironic-reconfigure.yml
...
In the HPE OneView web interface:
Navigate to the Server Hardware screen and add a new item, using the managed node's IPMI IP address and credentials. If this is the first node of this type being added, the corresponding Server Hardware Type is created automatically.

Navigate to the Server Profile Templates screen and add a Server Profile Template, using the Server Hardware Type that corresponds to the node being registered. In the boot settings section, the options that manage the boot order and enable PXE boot must be turned on.

Verify that the node is powered off. Power the node off if needed.
HPE OneView does not support managing the boot order for HPE DL servers in UEFI mode. Therefore, HPE DL servers can only be managed in Legacy BIOS mode.
Log in to the Cloud Lifecycle Manager and source the respective credentials file (for example, service.osrc for the admin account).
Review the Glance images with glance image-list:

$ glance image-list
+--------------------------------------+--------------------------+
| ID                                   | Name                     |
+--------------------------------------+--------------------------+
| c61da588-622c-4285-878f-7b86d87772da | cirros-0.3.4-x86_64      |
+--------------------------------------+--------------------------+
Ironic deploy images (boot image, ir-deploy-kernel, ir-deploy-ramdisk, ir-deploy-iso) are created automatically. The agent_pxe_oneview Ironic driver requires the ir-deploy-kernel and ir-deploy-ramdisk images.
Create the node using the agent_pxe_oneview driver.
$ ironic --ironic-api-version 1.22 node-create -d agent_pxe_oneview --name test-node-1 \
  --network-interface neutron -p memory_mb=131072 -p cpu_arch=x86_64 -p local_gb=80 -p cpus=2 \
  -p 'capabilities=boot_mode:bios,boot_option:local,server_hardware_type_uri:\
/rest/server-hardware-types/E5366BF8-7CBF-48DF-A752-8670CF780BB2,server_profile_template_uri:\
/rest/server-profile-templates/00614918-77f8-4146-a8b8-9fc276cd6ab2' \
  -i 'server_hardware_uri=/rest/server-hardware/32353537-3835-584D-5135-313930373046' \
  -i dynamic_allocation=True \
  -i deploy_kernel=633d379d-e076-47e6-b56d-582b5b977683 \
  -i deploy_ramdisk=d5828785-edf2-49fa-8de2-3ddb7f3270d5
+-------------------+--------------------------------------------------------------------------+
| Property          | Value                                                                    |
+-------------------+--------------------------------------------------------------------------+
| chassis_uuid      |                                                                          |
| driver            | agent_pxe_oneview                                                        |
| driver_info       | {u'server_hardware_uri': u'/rest/server-                                 |
|                   | hardware/32353537-3835-584D-5135-313930373046', u'dynamic_allocation':   |
|                   | u'True', u'deploy_ramdisk': u'd5828785-edf2-49fa-8de2-3ddb7f3270d5',     |
|                   | u'deploy_kernel': u'633d379d-e076-47e6-b56d-582b5b977683'}               |
| extra             | {}                                                                       |
| name              | test-node-1                                                              |
| network_interface | neutron                                                                  |
| properties        | {u'memory_mb': 131072, u'cpu_arch': u'x86_64', u'local_gb': 80, u'cpus': |
|                   | 2, u'capabilities':                                                      |
|                   | u'boot_mode:bios,boot_option:local,server_hardware_type_uri:/rest        |
|                   | /server-hardware-types/E5366BF8-7CBF-                                    |
|                   | 48DF-A752-8670CF780BB2,server_profile_template_uri:/rest/server-profile- |
|                   | templates/00614918-77f8-4146-a8b8-9fc276cd6ab2'}                         |
| resource_class    | None                                                                     |
| uuid              | c202309c-97e2-4c90-8ae3-d4c95afdaf06                                     |
+-------------------+--------------------------------------------------------------------------+
For deployments created via the Ironic/HPE OneView integration, the memory_mb property must reflect the physical amount of RAM installed in the managed node. That is, for a server with 128 GB of RAM, it works out to 128*1024=131072.
The boot mode in the capabilities property must reflect the boot mode used by the server: 'bios' for Legacy BIOS and 'uefi' for UEFI.
Values for server_hardware_type_uri, server_profile_template_uri and server_hardware_uri can be grabbed from the browser URL field while navigating to the respective objects in the HPE OneView UI. The URI corresponds to the part of the URL that starts from the token /rest. That is, the URL https://oneview.mycorp.net/#/profile-templates/show/overview/r/rest/server-profile-templates/12345678-90ab-cdef-0123-012345678901 corresponds to the URI /rest/server-profile-templates/12345678-90ab-cdef-0123-012345678901.
Grab the IDs of deploy_kernel and deploy_ramdisk from the glance image-list output above.
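For example, assuming the deploy images carry the ir-deploy prefix described above, you can filter the image list to find their IDs:

$ glance image-list | grep ir-deploy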
Create port.
$ ironic --ironic-api-version 1.22 port-create \
  --address aa:bb:cc:dd:ee:ff \
  --node c202309c-97e2-4c90-8ae3-d4c95afdaf06 \
  -l switch_id=ff:ee:dd:cc:bb:aa \
  -l switch_info=MY_SWITCH \
  -l port_id="Ten-GigabitEthernet 1/0/1" \
  --pxe-enabled true
+-----------------------+----------------------------------------------------------------+
| Property              | Value                                                          |
+-----------------------+----------------------------------------------------------------+
| address               | 8c:dc:d4:b5:7d:1c                                              |
| extra                 | {}                                                             |
| local_link_connection | {u'switch_info': u'C20DATA', u'port_id': u'Ten-GigabitEthernet |
|                       | 1/0/1', u'switch_id': u'ff:ee:dd:cc:bb:aa'}                    |
| node_uuid             | c202309c-97e2-4c90-8ae3-d4c95afdaf06                           |
| pxe_enabled           | True                                                           |
| uuid                  | 75b150ef-8220-4e97-ac62-d15548dc8ebe                           |
+-----------------------+----------------------------------------------------------------+
Ironic Multi-Tenancy networking model is used in this example. Therefore, ironic port-create command contains information about the physical switch. HPE OneView integration can also be performed using the Ironic Flat Networking model. For more information, see Book “Planning an Installation with Cloud Lifecycle Manager”, Chapter 9 “Example Configurations”, Section 9.6 “Ironic Examples”.
Move the node to the manageable provisioning state. The connectivity between Ironic and HPE OneView is verified, the Server Profile Template settings are validated, and the Server Hardware power status is retrieved from HPE OneView and set on the Ironic node.
$ ironic node-set-provision-state test-node-1 manage
Verify that node power status is populated.
$ ironic node-show test-node-1
+------------------------+------------------------------------------------------------------------+
| Property               | Value                                                                  |
+------------------------+------------------------------------------------------------------------+
| chassis_uuid           |                                                                        |
| clean_step             | {}                                                                     |
| console_enabled        | False                                                                  |
| created_at             | 2017-06-30T21:00:26+00:00                                              |
| driver                 | agent_pxe_oneview                                                      |
| driver_info            | {u'server_hardware_uri': u'/rest/server-                               |
|                        | hardware/32353537-3835-584D-5135-313930373046', u'dynamic_allocation': |
|                        | u'True', u'deploy_ramdisk': u'd5828785-edf2-49fa-8de2-3ddb7f3270d5',   |
|                        | u'deploy_kernel': u'633d379d-e076-47e6-b56d-582b5b977683'}             |
| driver_internal_info   | {}                                                                     |
| extra                  | {}                                                                     |
| inspection_finished_at | None                                                                   |
| inspection_started_at  | None                                                                   |
| instance_info          | {}                                                                     |
| instance_uuid          | None                                                                   |
| last_error             | None                                                                   |
| maintenance            | False                                                                  |
| maintenance_reason     | None                                                                   |
| name                   | test-node-1                                                            |
| network_interface      |                                                                        |
| power_state            | power off                                                              |
| properties             | {u'memory_mb': 131072, u'cpu_arch': u'x86_64', u'local_gb': 80,        |
|                        | u'cpus': 2, u'capabilities':                                           |
|                        | u'boot_mode:bios,boot_option:local,server_hardware_type_uri:/rest      |
|                        | /server-hardware-types/E5366BF8-7CBF-                                  |
|                        | 48DF-A752-86...BB2,server_profile_template_uri:/rest/server-profile-   |
|                        | templates/00614918-77f8-4146-a8b8-9fc276cd6ab2'}                       |
| provision_state        | manageable                                                             |
| provision_updated_at   | 2017-06-30T21:04:43+00:00                                              |
| raid_config            |                                                                        |
| reservation            | None                                                                   |
| resource_class         |                                                                        |
| target_power_state     | None                                                                   |
| target_provision_state | None                                                                   |
| target_raid_config     |                                                                        |
| updated_at             | 2017-06-30T21:04:43+00:00                                              |
| uuid                   | c202309c-97e2-4c90-8ae3-d4c95afdaf06                                   |
+------------------------+------------------------------------------------------------------------+
Move the node to the available provisioning state. The Ironic node will be reported to Nova as available.
$ ironic node-set-provision-state test-node-1 provide
Verify that node resources were added to Nova hypervisor stats.
$ nova hypervisor-stats
+----------------------+--------+
| Property             | Value  |
+----------------------+--------+
| count                | 1      |
| current_workload     | 0      |
| disk_available_least | 80     |
| free_disk_gb         | 80     |
| free_ram_mb          | 131072 |
| local_gb             | 80     |
| local_gb_used        | 0      |
| memory_mb            | 131072 |
| memory_mb_used       | 0      |
| running_vms          | 0      |
| vcpus                | 2      |
| vcpus_used           | 0      |
+----------------------+--------+
Create Nova flavor.
$ nova flavor-create m1.ironic auto 131072 80 2
+-------------+-----------+--------+------+-----------+------+-------+-------------+-----------+
| ID          | Name      | Mem_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+-------------+-----------+--------+------+-----------+------+-------+-------------+-----------+
| 33c8...f8d8 | m1.ironic | 131072 | 80   | 0         |      | 2     | 1.0         | True      |
+-------------+-----------+--------+------+-----------+------+-------+-------------+-----------+
$ nova flavor-key m1.ironic set capabilities:boot_mode="bios"
$ nova flavor-key m1.ironic set capabilities:boot_option="local"
$ nova flavor-key m1.ironic set cpu_arch=x86_64
All parameters (specifically, amount of RAM and boot mode) must correspond to ironic node parameters.
Create Nova keypair if needed.
$ nova keypair-add ironic_kp --pub-key ~/.ssh/id_rsa.pub
Boot Nova instance.
$ nova boot --flavor m1.ironic --image d6b5...e942 --key-name ironic_kp \
  --nic net-id=5f36...dcf3 test-node-1
+-------------------------------+-----------------------------------------------------+
| Property                      | Value                                               |
+-------------------------------+-----------------------------------------------------+
| OS-DCF:diskConfig             | MANUAL                                              |
| OS-EXT-AZ:availability_zone   |                                                     |
| OS-EXT-SRV-ATTR:host          | -                                                   |
| OS-EXT-SRV-ATTR:              |                                                     |
| hypervisor_hostname           | -                                                   |
| OS-EXT-SRV-ATTR:instance_name |                                                     |
| OS-EXT-STS:power_state        | 0                                                   |
| OS-EXT-STS:task_state         | scheduling                                          |
| OS-EXT-STS:vm_state           | building                                            |
| OS-SRV-USG:launched_at        | -                                                   |
| OS-SRV-USG:terminated_at      | -                                                   |
| accessIPv4                    |                                                     |
| accessIPv6                    |                                                     |
| adminPass                     | pE3m7wRACvYy                                        |
| config_drive                  |                                                     |
| created                       | 2017-06-30T21:08:42Z                                |
| flavor                        | m1.ironic (33c81884-b8aa-46...3b72f8d8)             |
| hostId                        |                                                     |
| id                            | b47c9f2a-e88e-411a-abcd-6172aea45397                |
| image                         | Ubuntu Trusty 14.04 BIOS (d6b5d971-42...5f2d88e942) |
| key_name                      | ironic_kp                                           |
| metadata                      | {}                                                  |
| name                          | test-node-1                                         |
| os-extended-volumes:          |                                                     |
| volumes_attached              | []                                                  |
| progress                      | 0                                                   |
| security_groups               | default                                             |
| status                        | BUILD                                               |
| tenant_id                     | c8573f7026d24093b40c769ca238fddc                    |
| updated                       | 2017-06-30T21:08:42Z                                |
| user_id                       | 2eae99221545466d8f175eeb566cc1b4                    |
+-------------------------------+-----------------------------------------------------+
During Nova instance boot, the following operations are performed by Ironic via the HPE OneView REST API.

In HPE OneView, a new Server Profile is generated for the specified Server Hardware, using the specified Server Profile Template. The boot order in the Server Profile is set to list PXE as the first boot source.

The managed node is powered on and boots the IPA (Ironic Python Agent) image via PXE.

The IPA image writes the user image onto disk and reports success back to Ironic.

Ironic modifies the Server Profile in HPE OneView to list 'Disk' as the default boot option.

Ironic reboots the node (via an HPE OneView REST API call).
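As a simple check (not part of the original procedure), you can watch the instance and the Ironic node until the instance reaches ACTIVE and the node reaches the active provision state:

$ nova list
$ ironic node-show test-node-1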
The current version of HPE Helion OpenStack supports integration with SUSE Enterprise Storage (SES). Integrating SUSE Enterprise Storage enables Ceph block storage as well as object and image storage services in HPE Helion OpenStack.
The SUSE Enterprise Storage integration is provided through the ardana-ses RPM package. This package is included in the patterns-cloud-ardana pattern; its installation is covered in Chapter 3, Installing the Cloud Lifecycle Manager server. The update repositories and the installation covered there are required to support SUSE Enterprise Storage integration. The latest updates should be applied before proceeding.
After the SUSE Enterprise Storage integration package has been installed, it must be configured. Files that contain relevant SUSE Enterprise Storage/Ceph deployment information must be placed into a directory on the deployer node. This includes the configuration file that describes various aspects of the Ceph environment as well as keyrings for each user and pool created in the Ceph environment. In addition, you need to edit the settings.yml file to enable the SUSE Enterprise Storage integration to run and to update all of the HPE Helion OpenStack service configuration files.
The settings.yml file resides in the ~/openstack/my_cloud/config/ses/ directory. Open the file for editing and uncomment the ses_config_path: parameter to specify the location on the deployer host containing the ses_config.yml file and all Ceph keyring files. For example:
# the directory where the ses config files are.
ses_config_path: /var/lib/ardana/openstack/my_cloud/config/ses/
ses_config_file: ses_config.yml

# Allow nova libvirt images_type to be set to rbd?
# Set this to false, if you only want rbd_user and rbd_secret to be set
# in the [libvirt] section of hypervisor.conf
ses_nova_set_images_type: True

# The unique uuid for use with virsh for cinder and nova
ses_secret_id: 457eb676-33da-42ec-9a8c-9293d545c337
If you are integrating with SUSE Enterprise Storage and do not want to store Nova images in Ceph, change the line in settings.yml from ses_nova_set_images_type: True to ses_nova_set_images_type: False.
For security reasons, you should use a unique UUID in the settings.yml file for ses_secret_id, replacing the fixed, hard-coded UUID in that file. You can generate a UUID that will be unique to your deployment using the command uuidgen.
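For example (the UUID below is random sample output; yours will differ):

ardana > uuidgen
f2c24dd7-7ba2-4be3-a0a4-7a0d4d2b5e43

Copy the generated value into settings.yml as the ses_secret_id.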
For SES deployments with version 5.5 and higher, there is a Salt runner that can create all the users, keyrings, and pools. It will also generate the YAML configuration that is needed to integrate with SUSE OpenStack Cloud. The integration runner will create the cinder, cinder-backup, and glance Ceph users. Both the Cinder and Nova services will have the same user, as Cinder needs access to create objects that Nova uses.
Log in as root to run the SES 5.5 Salt runner on the Salt admin host.
root # salt-run --out=yaml openstack.integrate prefix=mycloud
The prefix parameter allows pools to be created with the specified prefix. In this way, multiple cloud deployments can use different users and pools on the same SES deployment.
The sample yaml output:
ceph_conf:
  cluster_network: 10.84.56.0/21
  fsid: d5d7c7cb-5858-3218-a36f-d028df7b0673
  mon_host: 10.84.56.8, 10.84.56.9, 10.84.56.7
  mon_initial_members: ses-osd1, ses-osd2, ses-osd3
  public_network: 10.84.56.0/21
cinder:
  key: AQCdfIRaxefEMxAAW4zp2My/5HjoST2Y8mJg8A==
  rbd_store_pool: mycloud-cinder
  rbd_store_user: cinder
cinder-backup:
  key: AQBb8hdbrY2bNRAAqJC2ZzR5Q4yrionh7V5PkQ==
  rbd_store_pool: mycloud-backups
  rbd_store_user: cinder-backup
glance:
  key: AQD9eYRachg1NxAAiT6Hw/xYDA1vwSWLItLpgA==
  rbd_store_pool: mycloud-glance
  rbd_store_user: glance
nova:
  rbd_store_pool: mycloud-nova
radosgw_urls:
  - http://10.84.56.7:80/swift/v1
  - http://10.84.56.8:80/swift/v1
After you have run the openstack.integrate runner, copy the YAML output into the new ses_config.yml file, and save this file in the path specified in the settings.yml file on the deployer node. In this case, the file ses_config.yml must be saved in the /var/lib/ardana/openstack/my_cloud/config/ses/ directory on the deployer node.
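One possible way to capture the runner output and transfer it to the deployer node is sketched below; the redirection and the scp destination host are assumptions, only the target directory comes from the documentation above:

root # salt-run --out=yaml openstack.integrate prefix=mycloud > ses_config.yml
root # scp ses_config.yml ardana@<deployer-node>:/var/lib/ardana/openstack/my_cloud/config/ses/ses_config.yml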
For SUSE Enterprise Storage/Ceph deployments older than version 5.5, the following applies. For Ceph, it is necessary to create pools and users to allow the HPE Helion OpenStack services to use the SUSE Enterprise Storage/Ceph cluster. Pools and users must be created for the Cinder, Cinder backup, and Glance services. Both the Cinder and Nova services must have the same user, as Cinder needs access to create objects that Nova uses. Instructions for creating and managing pools, users, and keyrings are covered in the SUSE Enterprise Storage documentation under Key Management; a rough sketch follows.
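As a rough sketch only, following the general upstream Ceph/OpenStack pattern rather than the authoritative SUSE Enterprise Storage procedure (pool names, placement-group counts, and capabilities must be adapted to your cluster), creating a pool and a matching keyring on the Ceph admin node might look like this:

root # ceph osd pool create cinder 128
root # ceph auth get-or-create client.cinder \
      mon 'allow r' \
      osd 'allow class-read object_prefix rbd_children, allow rwx pool=cinder, allow rwx pool=nova, allow rx pool=glance' \
      -o ceph.client.cinder.keyring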
Example of ses_config.yml:
ses_cluster_configuration:
  ses_cluster_name: ceph
  ses_radosgw_url: "https://192.168.56.8:8080/swift/v1"

  conf_options:
    ses_fsid: d5d7c7cb-5858-3218-a36f-d028df7b1111
    ses_mon_initial_members: ses-osd2, ses-osd3, ses-osd1
    ses_mon_host: 192.168.56.8, 192.168.56.9, 192.168.56.7
    ses_public_network: 192.168.56.0/21
    ses_cluster_network: 192.168.56.0/21

  cinder:
    rbd_store_pool: cinder
    rbd_store_pool_user: cinder
    keyring_file_name: ceph.client.cinder.keyring

  cinder-backup:
    rbd_store_pool: backups
    rbd_store_pool_user: cinder_backup
    keyring_file_name: ceph.client.cinder-backup.keyring

  # Nova uses the cinder user to access the nova pool, cinder pool
  # So all we need here is the nova pool name.
  nova:
    rbd_store_pool: nova

  glance:
    rbd_store_pool: glance
    rbd_store_pool_user: glance
    keyring_file_name: ceph.client.glance.keyring
Example contents of the directory specified in the settings.yml file:
ardana > ls -al ~/openstack/my_cloud/config/ses
ceph.client.cinder-backup.keyring ceph.client.cinder.keyring ceph.client.glance.keyring ses_config.yml
Modify the glance_default_store option in ~/openstack/my_cloud/definition/data/control_plane.yml:
.
.
- rabbitmq
# - glance-api
- glance-api:
    glance_default_store: 'rbd'
- glance-registry
- glance-client
Commit your configuration to your local git repo:
ardana > cd ~/openstack/ardana/ansible
ardana > git add -A
ardana > git commit -m "add SES integration"
Run the configuration processor.
ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml
Create a deployment directory.
ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost ready-deployment.yml
Run a series of reconfiguration playbooks.
ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts ses-deploy.yml
ardana > ansible-playbook -i hosts/verb_hosts cinder-reconfigure.yml
ardana > ansible-playbook -i hosts/verb_hosts glance-reconfigure.yml
ardana > ansible-playbook -i hosts/verb_hosts nova-reconfigure.yml
Configuring SUSE Enterprise Storage for Integration with RADOS Gateway
RADOS gateway integration can be enabled (or disabled) by adding (or removing) the following line in ses_config.yml:

ses_radosgw_url: "https://192.168.56.8:8080/swift/v1"
If RADOS gateway integration is enabled, additional SUSE Enterprise Storage configuration is needed. The RADOS gateway must be configured to use Keystone for authentication. This is done by adding the configuration statements below to the rados section of ceph.conf on the RADOS node.
[client.rgw.HOSTNAME]
rgw frontends = "civetweb port=80+443s"
rgw enable usage log = true
rgw keystone url = KEYSTONE_ENDPOINT (for example: https://192.168.24.204:5000)
rgw keystone admin user = KEYSTONE_ADMIN_USER
rgw keystone admin password = KEYSTONE_ADMIN_PASSWORD
rgw keystone admin project = KEYSTONE_ADMIN_PROJECT
rgw keystone admin domain = KEYSTONE_ADMIN_DOMAIN
rgw keystone api version = 3
rgw keystone accepted roles = admin,Member,_member_
rgw keystone accepted admin roles = admin
rgw keystone revocation interval = 0
rgw keystone verify ssl = false  # If keystone is using self-signed certificate
After making these changes to ceph.conf, the RADOS gateway service needs to be restarted.
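Assuming the standard systemd unit name used by SUSE Enterprise Storage for the gateway (replace HOSTNAME with the gateway instance name from ceph.conf), the restart would look roughly like this on the RADOS gateway node:

root # systemctl restart ceph-radosgw@rgw.HOSTNAME.service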
Enabling RADOS gateway replaces the existing Object Storage endpoint with the RADOS gateway endpoint.
Enabling HTTPS, Creating and Importing a Certificate
SUSE Enterprise Storage integration uses the HTTPS protocol to connect to the RADOS gateway. However, with SUSE Enterprise Storage 5, HTTPS is not enabled by default. To enable the gateway role to communicate securely using SSL, you need to either have a CA-issued certificate or create a self-signed one. Instructions for both are available in the SUSE Enterprise Storage documentation.
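For illustration only (the SUSE Enterprise Storage documentation remains the authoritative source; the host name and file paths here are assumptions), a self-signed certificate with a matching subjectAltName could be generated roughly as follows:

root # cat > rgw-ssl.cnf <<'EOF'
[req]
distinguished_name = req_dn
x509_extensions    = v3_ext
prompt             = no
[req_dn]
CN = rgw.example.com
[v3_ext]
subjectAltName = DNS:rgw.example.com
EOF
root # openssl req -x509 -nodes -days 1095 -newkey rsa:4096 \
      -keyout rgw.key -out rgw.crt -config rgw-ssl.cnf
# civetweb expects the key and certificate concatenated into one .pem file,
# which is then referenced from the rgw frontends line, for example:
#   rgw frontends = "civetweb port=80+443s ssl_certificate=/etc/ceph/rgw.pem"
root # cat rgw.key rgw.crt > /etc/ceph/rgw.pem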
The certificate needs to be installed on your Cloud Lifecycle Manager. On the Cloud Lifecycle Manager, copy the cert to /tmp/ardana_tls_cacerts. Then deploy it.
ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts tls-trust-deploy.yml
ardana > ansible-playbook -i hosts/verb_hosts tls-reconfigure.yml
When creating the certificate, the subjectAltName must match the ses_radosgw_url entry in ses_config.yml. Either an IP address or FQDN can be used, but these values must be the same in both places.
The following steps will deploy your configuration.
Commit your configuration to your local git repo.
ardana > cd ~/openstack/ardana/ansible
ardana > git add -A
ardana > git commit -m "add SES integration"
Run the configuration processor.
ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml
Create a deployment directory.
ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost ready-deployment.yml
Run a series of reconfiguration playbooks.
ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts ses-deploy.yml
ardana > ansible-playbook -i hosts/verb_hosts cinder-reconfigure.yml
ardana > ansible-playbook -i hosts/verb_hosts glance-reconfigure.yml
ardana > ansible-playbook -i hosts/verb_hosts nova-reconfigure.yml
Reconfigure the Cloud Lifecycle Manager to complete the deployment.
ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts ardana-reconfigure.yml
Due to a security issue described in http://docs.ceph.com/docs/master/rbd/rbd-openstack/?highlight=uuid#enable-copy-on-write-cloning-of-images, we do not recommend the copy-on-write cloning of images when Glance and Cinder are both using a Ceph back-end. However, if you want to use this feature for faster operation, you can enable it as follows.
Open the ~/openstack/my_cloud/config/glance/glance-api.conf.j2 file for editing and add show_image_direct_url = True under the [DEFAULT] section.
Commit changes:
git add -A
git commit -m "Enable Copy-on-Write Cloning"
Run the required playbooks:
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml
ardana > ansible-playbook -i hosts/localhost ready-deployment.yml
ardana > cd /var/lib/ardana/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts glance-reconfigure.yml
Note that this exposes the back-end location via Glance's API, so the end-point should not be publicly accessible when Copy-On-Write image cloning is enabled.
SUSE Enterprise Storage performance can be improved with Image-Volume cache. Be aware that Image-Volume cache and Copy-on-Write cloning cannot be used for the same storage back-end. For more information, see the OpenStack documentation.
Enable Image-Volume cache with the following steps:
Open the ~/openstack/my_cloud/config/cinder/cinder.conf.j2 file for editing.
Add the image_volume_cache_enabled = True option under the [ses_ceph] section.
Commit changes:
ardana > git add -A
ardana > git commit -m "Enable Image-Volume cache"
Run the required playbooks:
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml
ardana > ansible-playbook -i hosts/localhost ready-deployment.yml
ardana > cd /var/lib/ardana/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts cinder-reconfigure.yml