This section contains tutorials for common tasks for your HPE Helion OpenStack 8 cloud.
This document provides simplified instructions for installing and setting up an HPE Helion OpenStack cloud. Use this quickstart guide to build testing, demonstration, and lab-type environments rather than production installations. When you complete this quickstart process, you will have a fully functioning HPE Helion OpenStack demo environment.
These simplified instructions are intended for testing or demonstration. Instructions for production installations are in Book “Installing with Cloud Lifecycle Manager”.
The following are short descriptions of the components that HPE Helion OpenStack employs when installing and deploying your cloud.
Ansible. Ansible is a powerful configuration management tool used by HPE Helion OpenStack to manage nearly all aspects of your cloud infrastructure. Most commands in this quickstart guide execute Ansible scripts, known as playbooks. You will run playbooks that install packages, edit configuration files, manage network settings, and take care of the general administration tasks required to get your cloud up and running.
Get more information on Ansible at https://www.ansible.com/.
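Nearly every step in this guide is a single ansible-playbook command run from a directory on the Cloud Lifecycle Manager. The invocations all follow the same basic pattern, shown here with placeholder names:
ansible-playbook -i INVENTORY_FILE PLAYBOOK_NAME.yml
Playbooks that act on the Cloud Lifecycle Manager itself use the hosts/localhost inventory, while playbooks that act on the cloud nodes use the hosts/verb_hosts inventory, as you will see in the steps that follow.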
Cobbler. Cobbler is another third-party tool used by HPE Helion OpenStack to deploy operating systems across the physical servers that make up your cloud. Find more info at http://cobbler.github.io/.
Git. Git is the version control system used to manage the configuration files that define your cloud. Any changes made to your cloud configuration files must be committed to the locally hosted git repository to take effect. Read more information on Git at https://git-scm.com/.
Successfully deploying an HPE Helion OpenStack environment is a large endeavor, but it is not complicated. For a successful deployment, you must put a number of components in place before rolling out your cloud. Most importantly, a basic HPE Helion OpenStack deployment requires the proper network infrastructure. Because HPE Helion OpenStack segregates the network traffic of many of its elements, if the necessary networks, routes, and firewall access rules are not in place, the communication required for a successful deployment cannot occur.
When your network infrastructure is in place, go ahead and set up the Cloud Lifecycle Manager. This is the server that will orchestrate the deployment of the rest of your cloud. It is also the server you will run most of your deployment and management commands on.
Set up the Cloud Lifecycle Manager
Download the installation media
Obtain a copy of the HPE Helion OpenStack installation media, and make sure that it is accessible by the server that you are installing it on. Your method of doing this may vary. For instance, some may choose to load the installation ISO on a USB drive and physically attach it to the server, while others may run the IPMI Remote Console and attach the ISO to a virtual disc drive.
Install the operating system
Boot your server, using the installation media as the boot source.
Choose "install" from the list of options and choose your preferred keyboard layout, location, language, and other settings.
Set the address, netmask, and gateway for the primary network interface.
Create a root user account.
Proceed with the OS installation. After the installation is complete and the server has rebooted into the new OS, log in with the user account you created.
Configure the new server
SSH to your new server, and set a valid DNS nameserver in the /etc/resolv.conf file.
Set the LC_ALL environment variable:
export LC_ALL=C
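The export command applies only to your current shell session. If you want the setting to persist across logins (a convenience, not a requirement of the installer), you can append it to your shell profile, for example:
echo 'export LC_ALL=C' >> ~/.bashrc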
You now have a server running SUSE Linux Enterprise Server (SLES). The next step is to configure this machine as a Cloud Lifecycle Manager.
Configure the Cloud Lifecycle Manager
The installation media you used to install the OS on the server also has the files that will configure your cloud. You need to mount this installation media on your new server in order to use these files.
Using the URL that you obtained the HPE Helion OpenStack installation media from, run wget to download the ISO file to your server:
wget INSTALLATION_ISO_URL
Now mount the ISO in the /media/cdrom/ directory:
sudo mount INSTALLATION_ISO /media/cdrom/
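To confirm that the ISO mounted correctly, list the directory used in the next step and check that the ardana tar file is present:
ls /media/cdrom/ardana/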
Unpack the tar file found in the /media/cdrom/ardana/ directory where you just mounted the ISO:
tar xvf /media/cdrom/ardana/ardana-x.x.x-x.tar
Now you will install and configure all the components needed to turn this server into a Cloud Lifecycle Manager. Run the ardana-init.bash script from the unpacked tar file:
~/ardana-x.x.x/ardana-init.bash
The ardana-init.bash script prompts you to enter an optional SSH passphrase. This passphrase protects the RSA key used to SSH to the other cloud nodes. The passphrase is optional, and you can skip it by pressing Enter at the prompt.
The ardana-init.bash script automatically installs and configures everything needed to set up this server as the lifecycle manager for your cloud.
When the script has finished running, you can proceed to the next step, editing your input files.
Edit your input files
Your HPE Helion OpenStack input files are where you define your cloud infrastructure and how it runs. The input files define options such as which servers are included in your cloud, the type of disks the servers use, and their network configuration. The input files also define which services your cloud will provide and use, the network architecture, and the storage backends for your cloud.
There are several example configurations, which you can find on your Cloud Lifecycle Manager in the ~/openstack/examples/ directory.
The simplest way to set up your cloud is to copy the contents of one of these example configurations to your ~/openstack/my_cloud/definition/ directory. You can then edit the copied files to define your cloud.
cp -r ~/openstack/examples/CHOSEN_EXAMPLE/* ~/openstack/my_cloud/definition/
Edit the files in your ~/openstack/my_cloud/definition/ directory to define your cloud.
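As a purely illustrative sketch, a single node entry in the servers.yml input file generally looks like the following. The exact file names, keys, and values depend on the example configuration you copied, so treat everything here as a placeholder and rely on the copied example files for the authoritative structure:
servers:
  - id: controller1
    ip-addr: 192.168.10.3
    role: CONTROLLER-ROLE
    server-group: RACK1
    nic-mapping: HP-DL360-4PORT
    mac-addr: b2:72:8d:ac:7c:6f
    ilo-ip: 192.168.9.3
    ilo-user: admin
    ilo-password: password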
Commit your changes
When you finish editing the necessary input files, stage them, and then commit the changes to the local Git repository:
cd ~/openstack/ardana/ansible
git add -A
git commit -m "My commit message"
Image your servers
Now that you have finished editing your input files, you can deploy the configuration to the servers that will comprise your cloud.
Image the servers. You will install the SLES operating system across all the servers in your cloud, using Ansible playbooks to trigger the process.
The following playbook confirms that your servers are accessible over their IPMI ports, which is a prerequisite for the imaging process:
ansible-playbook -i hosts/localhost bm-power-status.yml
Now validate that your cloud configuration files have proper YAML syntax by running the config-processor-run.yml playbook:
ansible-playbook -i hosts/localhost config-processor-run.yml
If you receive an error when running the preceding playbook, one or more of your configuration files has an issue. Refer to the output of the Ansible playbook, and look for clues in the Ansible log file, found at ~/.ansible/ansible.log.
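A quick way to narrow down the failure is to search that log for error output, for example:
grep -i error ~/.ansible/ansible.log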
The next step is to prepare your imaging system, Cobbler, to deploy operating systems to all your cloud nodes:
ansible-playbook -i hosts/localhost cobbler-deploy.yml
Now you can image your cloud nodes. You will use an Ansible playbook to trigger Cobbler to deploy operating systems to all the nodes you specified in your input files:
ansible-playbook -i hosts/localhost bm-reimage.yml
The bm-reimage.yml playbook performs the following operations:
Powers down the servers.
Sets the servers to boot from a network interface.
Powers on the servers and performs a PXE OS installation.
Waits for the servers to power themselves down as part of a successful OS installation. This can take some time.
Sets the servers to boot from their local hard disks and powers on the servers.
Waits for the SSH service to start on the servers and verifies that they have the expected host-key signature.
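When the playbook finishes, you can re-run the power-status check from the earlier step to confirm that every node reports as powered on before you continue with deployment:
ansible-playbook -i hosts/localhost bm-power-status.yml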
Deploy your cloud
Now that your servers are running the SLES operating system, it is time to configure them for the roles they will play in your new cloud.
Prepare the Cloud Lifecycle Manager to deploy your cloud configuration to all the nodes:
ansible-playbook -i hosts/localhost ready-deployment.yml
NOTE: The preceding playbook creates a new directory, ~/scratch/ansible/next/ardana/ansible/, from which you will run many of the following commands.
(Optional) If you are reusing servers or disks to run your cloud, you can wipe the disks of your newly imaged servers by running the wipe_disks.yml playbook:
cd ~/scratch/ansible/next/ardana/ansible/
ansible-playbook -i hosts/verb_hosts wipe_disks.yml
The wipe_disks.yml playbook removes any existing data from the drives on your new servers. This can be helpful if you are reusing servers or disks. This action will not affect the OS partitions on the servers.
The wipe_disks.yml playbook is only meant to be run on systems immediately after running bm-reimage.yml. If used for any other case, it may not wipe all of the expected partitions. For example, if site.yml fails, you cannot start fresh by running wipe_disks.yml. You must bm-reimage the node first and then run wipe_disks.
Now it is time to deploy your cloud. Do this by running the site.yml playbook, which pushes the configuration you defined in the input files out to all the servers that will host your cloud.
cd ~/scratch/ansible/next/ardana/ansible/
ansible-playbook -i hosts/verb_hosts site.yml
The site.yml playbook installs packages, starts services, configures network interface settings, sets iptables firewall rules, and more. Upon successful completion of this playbook, your HPE Helion OpenStack cloud will be in place and in a running state. This playbook can take up to six hours to complete.
SSH to your nodes
Now that you have successfully run site.yml
, your cloud
will be up and running. You can verify connectivity to your nodes by
connecting to each one by using SSH. You can find the IP addresses of your
nodes by viewing the /etc/hosts
file.
For security reasons, you can only SSH to your nodes from the Cloud Lifecycle Manager. SSH connections from any machine other than the Cloud Lifecycle Manager will be refused by the nodes.
From the Cloud Lifecycle Manager, SSH to your nodes:
ssh <management IP address of node>
Also note that SSH is limited to your cloud's management network. Each node has an address on the management network, and you can find this address by reading the /etc/hosts or server_info.yml file.
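For example, you could look up a node's management address and then connect to it as follows. The node name and address shown here are purely illustrative; use the entries from your own /etc/hosts file:
grep -i controller /etc/hosts
ssh 192.168.10.3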
HPE Helion OpenStack uses the ELK (Elasticsearch, Logstash, Kibana) stack for log management across the entire cloud infrastructure. This configuration facilitates simple administration as well as integration with third-party tools. This tutorial covers how to forward your logs to a third-party tool or service, and how to access and search the Elasticsearch log stores through API endpoints.
The ELK logging stack consists of the Elasticsearch, Logstash, and Kibana elements:
Logstash. Logstash reads the log data from the services running on your servers, and then aggregates and ships that data to a storage location. By default, Logstash sends the data to the Elasticsearch indexes, but it can also be configured to send data to other storage and indexing tools such as Splunk.
Elasticsearch. Elasticsearch is the storage and indexing component of the ELK stack. It stores and indexes the data received from Logstash. Indexing makes your log data searchable by tools designed for querying and analyzing massive sets of data. You can query the Elasticsearch datasets from the built-in Kibana console, a third-party data analysis tool, or through the Elasticsearch API (covered later).
Kibana. Kibana provides a simple and easy-to-use method for searching, analyzing, and visualizing the log data stored in the Elasticsearch indexes. You can customize the Kibana console to provide graphs, charts, and other visualizations of your log data.
You can query the Elasticsearch indexes through various language-specific APIs, as well as directly over the IP address and port that Elasticsearch exposes on your implementation. By default, Elasticsearch listens on localhost, port 9200. You can run queries directly from a terminal using curl. For example:
curl -XGET 'http://localhost:9200/_search?q=tag:yourSearchTag'
The preceding command searches all indexes for all data with the "yourSearchTag" tag.
You can also use the Elasticsearch API from outside the logging node. This method connects over the Kibana VIP address, port 5601, using basic http authentication. For example, you can use the following command to perform the same search as the preceding search:
curl -u kibana:<password> kibana_vip:5601/_search?q=tag:yourSearchTag
You can further refine your search to a specific index of data, in this case the "elasticsearch" index:
curl -XGET 'http://localhost:9200/elasticsearch/_search?q=tag:yourSearchTag'
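If you are reading responses directly in a terminal, Elasticsearch also accepts a pretty query parameter that formats the returned JSON for readability. For example:
curl -XGET 'http://localhost:9200/_search?q=tag:yourSearchTag&pretty'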
The search API is RESTful, so responses are provided in JSON format. Here's a sample (though empty) response:
{ "took":13, "timed_out":false, "_shards":{ "total":45, "successful":45, "failed":0 }, "hits":{ "total":0, "max_score":null, "hits":[] } }
You can find more detailed Elasticsearch API documentation at https://www.elastic.co/guide/en/elasticsearch/reference/current/search.html.
Review the Elasticsearch Python API documentation at http://elasticsearch-py.readthedocs.io/en/master/api.html.
Read the Elasticsearch Java API documentation at https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/index.html.
You can configure Logstash to ship your logs to an outside storage and indexing system, such as Splunk. Setting up this configuration is as simple as editing a few configuration files, and then running the Ansible playbooks that implement the changes. Here are the steps.
Begin by logging in to the Cloud Lifecycle Manager.
Verify that the logging system is up and running:
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts logging-status.yml
When the preceding playbook completes without error, proceed to the next step.
Edit the Logstash configuration file, found at the following location:
~/openstack/ardana/ansible/roles/logging-server/templates/logstash.conf.j2
Near the end of the Logstash configuration file, you will find a section for configuring Logstash output destinations. The following example demonstrates the changes necessary to forward your logs to an outside server; the added tcp block sets up a TCP connection to the destination server's IP address over port 5514.
# Logstash outputs
output {
# Configure Elasticsearch output
# http://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html
elasticsearch {
index => "%{[@metadata][es_index]}
hosts => ["{{ elasticsearch_http_host }}:{{ elasticsearch_http_port }}"]
flush_size => {{ logstash_flush_size }}
idle_flush_time => 5
workers => {{ logstash_threads }}
}
# Forward Logs to Splunk on TCP port 5514 which matches the one specified in Splunk Web UI.
tcp {
mode => "client"
host => "<Enter Destination listener IP address>"
port => 5514
}
}
Note that Logstash can forward log data to multiple sources, so there is no need to remove or alter the Elasticsearch section in the preceding file. However, if you choose to stop forwarding your log data to Elasticsearch, you can do so by removing the related section in this file, and then continue with the following steps.
Commit your changes to the local git repository:
cd ~/openstack/ardana/ansible
git add -A
git commit -m "Your commit message"
Run the configuration processor to check the status of all configuration files:
ansible-playbook -i hosts/localhost config-processor-run.yml
Run the ready-deployment playbook:
ansible-playbook -i hosts/localhost ready-deployment.yml
Implement the changes to the Logstash configuration file:
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts logging-server-configure.yml
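When the playbook completes, you can re-run the status playbook from the beginning of this procedure to confirm that the logging services are still healthy after the change:
ansible-playbook -i hosts/verb_hosts logging-status.yml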
Please note that configuring the receiving service will vary from product to product. Consult the documentation for your particular product for instructions on how to set it up to receive log files from Logstash.
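As a quick sanity check that the logging node can reach the destination listener at all (assuming the netcat utility is available and the listener is already running), you can test the TCP connection from the logging node:
nc -vz <Destination listener IP address> 5514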
The HPE Helion OpenStack 8 logging solution provides a flexible and extensible framework to centralize the collection and processing of logs from all nodes in your cloud. The logs are shipped to a highly available and fault-tolerant cluster where they are transformed and stored for better searching and reporting. The HPE Helion OpenStack 8 logging solution uses the ELK stack (Elasticsearch, Logstash and Kibana) as a production-grade implementation and can support other storage and indexing technologies.
You can configure Logstash, the service that aggregates and forwards the logs to a searchable index, to send the logs to a third-party target, such as Splunk.
For how to integrate the HPE Helion OpenStack 8 centralized logging solution with Splunk, including the steps to set up and forward logs, please refer to Section 3.1, “Splunk Integration”.
You can configure your HPE Helion OpenStack cloud to work with an outside user authentication source such as Active Directory or OpenLDAP. Keystone, the HPE Helion OpenStack identity service, functions as the first stop for any user authorization/authentication requests. Keystone can also function as a proxy for user account authentication, passing along authentication and authorization requests to any LDAP-enabled system that has been configured as an outside source. This type of integration lets you use an existing user-management system such as Active Directory and its powerful group-based organization features as a source for permissions in HPE Helion OpenStack.
Upon successful completion of this tutorial, your cloud will refer user authentication requests to an outside LDAP-enabled directory system, such as Microsoft Active Directory or OpenLDAP.
To configure your HPE Helion OpenStack cloud to use an outside user-management source, perform the following steps:
Make sure that the LDAP-enabled system you plan to integrate with is up and running and accessible over the necessary ports from your cloud management network.
Edit the /var/lib/ardana/openstack/my_cloud/config/keystone/keystone.conf.j2 file and set the following options:
domain_specific_drivers_enabled = True
domain_configurations_from_database = False
Create a YAML file in the /var/lib/ardana/openstack/my_cloud/config/keystone/ directory that defines your LDAP connection. You can make a copy of the sample Keystone-LDAP configuration file, and then edit that file with the details of your LDAP connection.
The following example copies the keystone_configure_ldap_sample.yml file and names the new file keystone_configure_ldap_my.yml:
ardana > cp /var/lib/ardana/openstack/my_cloud/config/keystone/keystone_configure_ldap_sample.yml \
  /var/lib/ardana/openstack/my_cloud/config/keystone/keystone_configure_ldap_my.yml
Edit the new file to define the connection to your LDAP source. This guide does not provide comprehensive information on all aspects of the keystone_configure_ldap.yml file. Find a complete list of Keystone/LDAP configuration file options at:
https://github.com/openstack/keystone/blob/stable/pike/etc/keystone.conf.sample
The following file illustrates an example Keystone configuration that is customized for an Active Directory connection.
keystone_domainldap_conf:

    # CA certificates file content.
    # Certificates are stored in Base64 PEM format. This may be entire LDAP server
    # certificate (in case of self-signed certificates), certificate of authority
    # which issued LDAP server certificate, or a full certificate chain (Root CA
    # certificate, intermediate CA certificate(s), issuer certificate).
    #
    cert_settings:
      cacert: |
        -----BEGIN CERTIFICATE-----
        certificate appears here
        -----END CERTIFICATE-----

    # A domain will be created in MariaDB with this name, and associated with ldap back end.
    # Installer will also generate a config file named /etc/keystone/domains/keystone.<domain_name>.conf
    #
    domain_settings:
      name: ad
      description: Dedicated domain for ad users

    conf_settings:
      identity:
        driver: ldap

      # For a full list and description of ldap configuration options, please refer to
      # http://docs.openstack.org/liberty/config-reference/content/keystone-configuration-file.html.
      #
      # Please note:
      #  1. LDAP configuration is read-only. Configuration which performs write operations (i.e. creates users, groups, etc)
      #     is not supported at the moment.
      #  2. LDAP is only supported for identity operations (reading users and groups from LDAP). Assignment
      #     operations with LDAP (i.e. managing roles, projects) are not supported.
      #  3. LDAP is configured as non-default domain. Configuring LDAP as a default domain is not supported.
      #
      ldap:
        url: ldap://YOUR_COMPANY_AD_URL
        suffix: YOUR_COMPANY_DC
        query_scope: sub
        user_tree_dn: CN=Users,YOUR_COMPANY_DC
        user: CN=admin,CN=Users,YOUR_COMPANY_DC
        password: REDACTED
        user_objectclass: user
        user_id_attribute: cn
        user_name_attribute: cn
        group_tree_dn: CN=Users,YOUR_COMPANY_DC
        group_objectclass: group
        group_id_attribute: cn
        group_name_attribute: cn
        use_pool: True
        user_enabled_attribute: userAccountControl
        user_enabled_mask: 2
        user_enabled_default: 512
        use_tls: True
        tls_req_cert: demand
        # if you are configuring multiple LDAP domains, and LDAP server certificates are issued
        # by different authorities, make sure that you place certs for all the LDAP backend domains in the
        # cacert parameter as seen in this sample yml file so that all the certs are combined in a single CA file
        # and every LDAP domain configuration points to the combined CA file.
        # Note:
        #  1. Please be advised that every time a new ldap domain is configured, the single CA file gets overwritten
        #     and hence ensure that you place certs for all the LDAP backend domains in the cacert parameter.
        #  2. There is a known issue on one cert per CA file per domain when the system processes
        #     concurrent requests to multiple LDAP domains. Using the single CA file with all certs combined
        #     shall get the system working properly.
        tls_cacertfile: /etc/keystone/ssl/certs/all_ldapdomains_ca.pem
Add your new file to the local Git repository and commit the changes.
ardana > cd ~/openstack
ardana > git checkout site
ardana > git add -A
ardana > git commit -m "Adding LDAP server integration config"
Run the configuration processor and deployment preparation playbooks to validate the YAML files and prepare the environment for configuration.
ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml
ardana > ansible-playbook -i hosts/localhost ready-deployment.yml
Run the Keystone reconfiguration playbook to implement your changes, passing the newly created YAML file as an argument to the -e@FILE_PATH parameter:
ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts keystone-reconfigure.yml \
  -e@/var/lib/ardana/openstack/my_cloud/config/keystone/keystone_configure_ldap_my.yml
To integrate your HPE Helion OpenStack cloud with multiple domains, repeat these steps starting from Step 3 for each domain.
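After the reconfiguration playbook completes, one way to confirm that the new domain is visible to Keystone is to list domains with the OpenStack command-line client. This sketch assumes that admin credentials are available on the Cloud Lifecycle Manager, for example in a keystone.osrc file created during installation; adjust the file name to match your environment:
ardana > source ~/keystone.osrc
ardana > openstack domain list
The domain name you defined under domain_settings (ad in the example above) should appear in the output.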