The Cloud Lifecycle Manager can be installed on a Control Plane or on a stand-alone server.
Installing the Cloud Lifecycle Manager on a Control Plane happens as part of deploying your Cloud: the Cloud and the Cloud Lifecycle Manager are deployed together.
With a stand-alone Cloud Lifecycle Manager, you install the deployer first and then deploy your Cloud in a separate process. A stand-alone Cloud Lifecycle Manager can be deployed with either the Install UI or the command line.
Each method suits particular needs; the trade-offs below will help you choose.
Stand-alone Deployer
+ Compared to a Control Plane deployer, a stand-alone deployer is easier to back up and redeploy in case of disaster
+ Separates cloud management from components being managed
+ Does not use Control Plane resources
- Another server is required (less of a disadvantage if using a VM)
- Installation may be more complex than a Control Plane Cloud Lifecycle Manager
Control Plane Deployer
+ Installation is usually simpler than installing a stand-alone deployer
+ Requires fewer servers or VMs
- Could contend with workloads for resources
- Harder to redeploy in case of failure compared to stand-alone deployer
- Updating or modifying the controllers puts the Cloud Lifecycle Manager itself at risk
- Runs on one of the servers that is deploying or managing your Cloud
Summary
A Control Plane Cloud Lifecycle Manager is best for small, simple Cloud deployments.
With a larger, more complex cloud, a stand-alone deployer provides better recoverability and the separation of manager from managed components.
If you do not intend to install a stand-alone deployer, proceed to installing the Cloud Lifecycle Manager on a Control Plane.
Instructions for GUI installation are in Chapter 9, Installing with the Install UI.
Instructions for installing via the command line are in Chapter 12, Installing Mid-scale and Entry-scale KVM.
Review Chapter 2, Pre-Installation Checklist, for recommended pre-installation tasks.
Prepare the Cloud Lifecycle Manager node. The Cloud Lifecycle Manager must be accessible either directly or via ssh, and must have SUSE Linux Enterprise Server 12 SP3 installed. All nodes must be accessible to the Cloud Lifecycle Manager. If the nodes do not have direct access to online Cloud subscription channels, the Cloud Lifecycle Manager node will need to host the Cloud repositories.
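For example, a quick way to confirm that a node is reachable from the Cloud Lifecycle Manager is an ssh check (the user name and IP address below are placeholders for your environment):

tux > ssh cloud@192.168.10.3 hostname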
If you followed the installation instructions for the Cloud Lifecycle Manager server (see Chapter 3, Installing the Cloud Lifecycle Manager server), HPE Helion OpenStack software should already be installed. Double-check whether SUSE Linux Enterprise and HPE Helion OpenStack are properly registered at the SUSE Customer Center by starting YaST and running Software › Product Registration.
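Alternatively, the registration status can be checked from the command line with SUSEConnect:

tux > sudo SUSEConnect --status-text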
If you have not yet installed HPE Helion OpenStack, do so by starting YaST and running Software › Product Registration › Select Extensions. Choose HPE Helion OpenStack and follow the on-screen instructions. Make sure to register HPE Helion OpenStack during the installation process and to install the software pattern patterns-cloud-ardana.
tux > sudo zypper -n in patterns-cloud-ardana
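To verify the installation, query the package that delivers the pattern:

tux > rpm -q patterns-cloud-ardana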
Ensure the HPE Helion OpenStack media repositories and updates repositories are made available to all nodes in your deployment. This can be accomplished either by configuring the Cloud Lifecycle Manager server as an SMT mirror as described in Chapter 4, Installing and Setting Up an SMT Server on the Cloud Lifecycle Manager server (Optional) or by syncing or mounting the Cloud and updates repositories to the Cloud Lifecycle Manager server as described in Chapter 5, Software Repository Setup.
Configure passwordless sudo for the user created when setting up the node (as described in Section 3.4, “Creating a User”). Note that this is not the user ardana that will be used later in this procedure. In the following we assume you named the user cloud. Run the command visudo as user root and add the following line to the end of the file:

CLOUD ALL = (root) NOPASSWD:ALL

Make sure to replace CLOUD with your user name choice.
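To confirm the change, log in as that user (cloud in this example) and run a no-op command under sudo; with NOPASSWD in effect, no password prompt appears:

cloud > sudo -n true && echo "passwordless sudo OK"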
Set the password for the user ardana:

tux > sudo passwd ardana
Become the user ardana:

tux > su - ardana
Place a copy of the SUSE Linux Enterprise Server 12 SP3 .iso in the ardana home directory, /var/lib/ardana, and rename it to sles12sp3.iso.
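For example (the source path below is a placeholder for wherever your copy of the ISO is stored):

ardana > cp /tmp/SLE-12-SP3-Server-DVD-x86_64.iso /var/lib/ardana/sles12sp3.iso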
Install the templates, examples, and working model directories:

ardana > /usr/bin/ardana-init
The following steps are necessary to set up a stand-alone deployer, whether you will be using the Install UI or the command line.
Copy the HPE Helion OpenStack Entry-scale KVM example input model to a new stand-alone input model. You will edit this new input model so that it can be used for a stand-alone Cloud Lifecycle Manager installation.

ardana > cp -r ~/openstack/examples/entry-scale-kvm \
~/openstack/examples/entry-scale-kvm-stand-alone-deployer
Change to the new directory:

ardana > cd ~/openstack/examples/entry-scale-kvm-stand-alone-deployer
Edit the cloudConfig.yml file to change the name of the input model. This makes the model available both to the Install UI and to the command-line installation process. Change

name: entry-scale-kvm

to

name: entry-scale-kvm-stand-alone-deployer
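This edit can also be scripted; the sed call below assumes the name entry-scale-kvm appears exactly once in the file:

ardana > sed -i 's/name: entry-scale-kvm$/name: entry-scale-kvm-stand-alone-deployer/' cloudConfig.yml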
Change to the data directory.
Make the following edits to your configuration files. The indentation of each of the input files is significant: incorrect indentation will cause errors. Use the existing content in each of these files as a reference when adding additional content for your Cloud Lifecycle Manager. A quick syntax check is shown after the list below.
Update control_plane.yml to add the Cloud Lifecycle Manager.
Update server_roles.yml to add the Cloud Lifecycle Manager role.
Update net_interfaces.yml to add the interface definition for the Cloud Lifecycle Manager.
Create a disks_lifecycle_manager.yml file to define the disk layout for the Cloud Lifecycle Manager.
Update servers.yml to add the dedicated Cloud Lifecycle Manager node.
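After editing, it is worth confirming that each file still parses as valid YAML. A minimal sketch, assuming Python with the PyYAML module is available on the node:

ardana > python -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1]))' control_plane.yml

Repeat this for each file you have edited; a traceback indicates a syntax or indentation problem.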
control_plane.yml: The snippet below shows the addition of a single-node cluster into the control plane to host the Cloud Lifecycle Manager service. In addition to adding the new cluster, you also have to remove the lifecycle-manager component from cluster1 in the examples.
  clusters:
    - name: cluster0
      cluster-prefix: c0
      server-role: LIFECYCLE-MANAGER-ROLE
      member-count: 1
      allocation-policy: strict
      service-components:
        - lifecycle-manager
        - ntp-client

    - name: cluster1
      cluster-prefix: c1
      server-role: CONTROLLER-ROLE
      member-count: 3
      allocation-policy: strict
      service-components:
        - ntp-server
This specifies a single node of role LIFECYCLE-MANAGER-ROLE hosting the Cloud Lifecycle Manager.
server_roles.yml: The snippet below shows the insertion of the new server-role definition:
  server-roles:
    - name: LIFECYCLE-MANAGER-ROLE
      interface-model: LIFECYCLE-MANAGER-INTERFACES
      disk-model: LIFECYCLE-MANAGER-DISKS

    - name: CONTROLLER-ROLE
This defines a new server role which references a new interface-model and disk-model to be used when configuring the server.
net_interfaces.yml: The snippet below shows the insertion of the network-interface information:
  - name: LIFECYCLE-MANAGER-INTERFACES
    network-interfaces:
      - name: BOND0
        device:
          name: bond0
        bond-data:
          options:
            mode: active-backup
            miimon: 200
            primary: hed3
          provider: linux
          devices:
            - name: hed3
            - name: hed4
        network-groups:
          - MANAGEMENT
This assumes that the server uses the same physical networking layout as the other servers in the example.
disks_lifecycle_manager.yml: In the examples, disk models are provided as separate files (this is just a convention, not a limitation), so the following should be added as a new file named disks_lifecycle_manager.yml:
---
product:
  version: 2

disk-models:
- name: LIFECYCLE-MANAGER-DISKS
  # Disk model to be used for Cloud Lifecycle Manager nodes
  # /dev/sda_root is used as a volume group for /, /var/log and /var/crash
  # sda_root is a templated value to align with whatever partition is really used
  # This value is checked in os config and replaced by the partition actually used
  # on sda, e.g. sda1 or sda5
  volume-groups:
    - name: ardana-vg
      physical-volumes:
        - /dev/sda_root
      logical-volumes:
        # The policy is not to consume 100% of the space of each volume group.
        # 5% should be left free for snapshots and to allow for some flexibility.
        - name: root
          size: 80%
          fstype: ext4
          mount: /
        - name: crash
          size: 15%
          mount: /var/crash
          fstype: ext4
          mkfs-opts: -O large_file
      consumer:
        name: os
servers.yml: The snippet below shows the insertion of an additional server used for hosting the Cloud Lifecycle Manager. Provide the address information here for the server you are running on, that is, the node where you have installed the HPE Helion OpenStack ISO.
  servers:
    # NOTE: Addresses of servers need to be changed to match your environment.
    #
    # Add additional servers as required

    # Lifecycle-manager
    - id: lifecycle-manager
      ip-addr: YOUR IP ADDRESS HERE
      role: LIFECYCLE-MANAGER-ROLE
      server-group: RACK1
      nic-mapping: HP-SL230-4PORT
      mac-addr: 8c:dc:d4:b5:c9:e0
      # ipmi information is not needed

    # Controllers
    - id: controller1
      ip-addr: 192.168.10.3
      role: CONTROLLER-ROLE
With the stand-alone input model complete, you are ready to proceed to installing the stand-alone deployer with either the Install UI or the command line.
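If you will be using the command line, one optional sanity check before the full deployment is to run the configuration processor against the completed model; the exact workflow, including copying the model to ~/openstack/my_cloud/definition and committing it to git first, is covered in Chapter 12, Installing Mid-scale and Entry-scale KVM:

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml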