Applies to SUSE OpenStack Cloud 9

14 Pre-Installation Checklist

Important

The formatting of this page facilitates printing it out and using it to record details of your setup.

This checklist focuses on the Entry-scale KVM model, but you can adapt it to whichever example configuration you choose for your cloud.

14.1 BIOS and IPMI Settings

Ensure that the following BIOS and IPMI settings are applied to each bare-metal server:

Item
Choose either UEFI or Legacy BIOS in the BIOS settings
Verify the Date and Time settings in the BIOS

Note

SUSE OpenStack Cloud installs and runs with UTC, not local time.

Ensure that Wake-on-LAN is disabled in the BIOS
Ensure that the NIC port to be used for PXE installation has PXE enabled in the BIOS
Ensure that all other NIC ports have PXE disabled in the BIOS
Ensure that all hardware in the server not directly used by SUSE OpenStack Cloud is disabled

14.2 Network Setup and Configuration

Before installing SUSE OpenStack Cloud, the following networks must be provisioned and tested. These networks are not installed or managed by the cloud; you must install and manage them yourself as documented in Chapter 9, Example Configurations.

If you want a pluggable IPAM driver, it must be specified at install time: a non-default IPAM driver can only be selected during a clean install of SUSE OpenStack Cloud 9. If you are upgrading, you must use the default driver. More information can be found in Section 10.4.7, “Using IPAM Drivers in the Networking Service”.
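For orientation only, the sketch below shows how the driver choice surfaces in upstream neutron, where the ipam_driver option in neutron.conf selects the driver (the value shown is the upstream default; in SUSE OpenStack Cloud the selection is made through the install-time configuration described in the section referenced above):

[DEFAULT]
ipam_driver = internal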

Use these checklists to confirm and record your network configuration information.

Router

The IP router used with SUSE OpenStack Cloud must support updating its ARP table through gratuitous ARP packets.
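A hedged way to spot-check this behavior, assuming the iputils arping utility is available (the interface and IP address below are placeholders), is to send gratuitous ARP from a host on the network and then confirm that the router's ARP table has picked up that host's MAC address:

sudo arping -U -I eth0 -c 3 192.168.10.5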

PXE Installation Network

When provisioning the IP range, allocate sufficient IP addresses to cover both the current number of servers and any planned expansion. Use the following table to help calculate the requirements:

Instance | Description | IPs
Deployer O/S | | 1
Controller server O/S (x3) | | 3
Compute servers (2nd through 100th) | single IP per server |
Block storage host servers | single IP per server |
Item | Value
Network is untagged |
No DHCP servers other than SUSE OpenStack Cloud are on the network |
Switch PVID used to map any "internal" VLANs to untagged |
Routable to the IPMI network |
IP CIDR |
IP Range (Usable IPs) | begin:            end:
Default IP Gateway |
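To sanity-check the IP range you record above against your server counts, you can count the usable host addresses in a CIDR with a quick one-liner (the CIDR shown is a placeholder):

python3 -c 'import ipaddress; print(ipaddress.ip_network("192.168.10.0/24").num_addresses - 2)'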

Management Network

The management network is the backbone used for the majority of SUSE OpenStack Cloud management communications. Control messages are exchanged between the Controllers, Compute hosts, and cinder backends through this network. In addition to the control flows, the management network is also used to transport swift and iSCSI-based cinder block storage traffic between servers.

When provisioning the IP Range, allocate sufficient IP addresses to cover both the current number of servers and any planned expansion. Use the following table to help calculate the requirements:

Instance | Description | IPs
Controller server O/S (x3) | | 3
Controller VIP | | 1
Compute servers (2nd through 100th) | single IP per server |
VM servers | single IP per server |
VIP per cluster | |
Item | Value
Network is untagged |
No DHCP servers other than SUSE OpenStack Cloud are on the network |
Switch PVID used to map any "internal" VLANs to untagged |
IP CIDR |
IP Range (Usable IPs) | begin:            end:
Default IP Gateway |
VLAN ID |

IPMI Network

The IPMI network is used to connect the IPMI interfaces on the servers that are assigned for use with implementing the cloud. Cobbler uses this network to control the state of the servers during bare-metal deployments.

Item | Value
Network is untagged |
Routable to the Management Network |
IP Subnet |
Default IP Gateway |
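Once the network is cabled, you can confirm that each server's IPMI interface responds from the management side with a command such as the following (the address and credentials are placeholders):

ipmitool -I lanplus -H 192.0.2.10 -U admin -P password power status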

External API Network

The External API network is used to connect OpenStack endpoints to an external public network, such as a company’s intranet or, in the case of a public cloud provider, the public internet.

When provisioning the IP Range, allocate sufficient IP addresses to cover both the current number of servers and any planned expansion. Use the following table to help calculate the requirements.

Instance | Description | IPs
Controller server O/S (x3) | | 3
Controller VIP | | 1
Item | Value
VLAN Tag assigned |
IP CIDR |
IP Range (Usable IPs) | begin:            end:
Default IP Gateway |
VLAN ID |

External VM Network

The External VM network is used to connect cloud instances to an external public network, such as a company’s intranet or, in the case of a public cloud provider, the public internet. The external network has a predefined range of floating IPs, which are assigned to individual instances to enable communication to and from the instance and the assigned corporate intranet/internet. There should be a route between the External VM and External API networks so that instances provisioned in the cloud may access the Cloud API endpoints using their floating IPs.
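One hedged way to verify this route once the cloud is running is to query the identity endpoint from an instance that has a floating IP assigned (the VIP address below is a placeholder; keystone conventionally listens on port 5000):

curl -k https://192.0.2.100:5000/v3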

Item | Value
VLAN Tag assigned |
IP CIDR |
IP Range (Usable IPs) | begin:            end:
Default IP Gateway |
VLAN ID |

14.3 Cloud Lifecycle Manager

This server contains the SUSE OpenStack Cloud installer, which is based on Git, Ansible, and Cobbler.

Item | Value
Disk Requirement: single 8 GB disk, per Chapter 2, Hardware and Software Support Matrix |
Install the SUSE OpenStack Cloud extension (see Section 15.5.2, “Installing the SUSE OpenStack Cloud Extension”) |
Ensure your local DNS nameserver is listed in your /etc/resolv.conf file |
Install and configure NTP for your environment |
Ensure your NTP server(s) are listed in your /etc/ntp.conf file |
NTP time source: |
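A quick spot-check of the DNS and NTP items above on the Cloud Lifecycle Manager might look like the following (assuming the classic ntpd tooling implied by /etc/ntp.conf):

cat /etc/resolv.conf
grep '^server' /etc/ntp.conf
ntpq -p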

14.4 Information for the nic_mappings.yml Input File

Log on to each type of physical server you have and issue platform-appropriate commands to identify the bus-address and port-num values that may be required. For example, run the following command:

sudo lspci -D | grep -i net

and enter this information in the space below. Use this information for the bus-address value in your nic_mappings.yml file.
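The output looks similar to the following (illustrative values only); the leading field of each line, such as 0000:04:00.0, is the bus-address:

0000:04:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
0000:04:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)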

NIC Adapter PCI Bus Address Output











To find the port-num use:

cat /sys/class/net/<device name>/dev_port

where <device name> is the name of the device currently mapped to this address, not necessarily the name of the device to be mapped. Enter the information for your system in the space below.
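To list the port number of every network device at once, a small convenience loop such as the following (not part of the product documentation) can be used:

for dev in /sys/class/net/*; do
    echo "$(basename "$dev"): $(cat "$dev"/dev_port 2>/dev/null)"
done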

Network Device Port Number Output











14.5 Control Plane

The Control Plane consists of at least three servers in a highly available cluster that host the core SUSE OpenStack Cloud services, including nova, keystone, glance, cinder, heat, neutron, swift, ceilometer, and horizon. Additional services include mariadb, ip-cluster, apache2, rabbitmq, memcached, zookeeper, kafka, storm, monasca, logging, and cmc.

Note

To mitigate the split-brain situation described in Section 18.4, “Network Service Troubleshooting”, it is recommended that you configure an HA network with Multi-Chassis Link Aggregation (MLAG) and NIC bonding for all controllers, delivering system-level redundancy as well as network-level resiliency. Reducing the ARP timeout on the top-of-rack (ToR) switches also helps.
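If you configure NIC bonding as recommended, the kernel exposes the live state of each bond; a quick check on a controller (the bond name is an example) is:

cat /proc/net/bonding/bond0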

Table 14.1: Control Plane 1
Item | Value
Disk Requirement: 3x 512 GB disks (or enough space to create three logical drives with that amount of space) |
Ensure the disks are wiped |
MAC address of first NIC |
A second NIC, or a set of bonded NICs, is required |
IPMI IP address |
IPMI Username/Password |
Table 14.2: Control Plane 2
Item | Value
Disk Requirement: 3x 512 GB disks (or enough space to create three logical drives with that amount of space) |
Ensure the disks are wiped |
MAC address of first NIC |
A second NIC, or a set of bonded NICs, is required |
IPMI IP address |
IPMI Username/Password |
Table 14.3: Control Plane 3
Item | Value
Disk Requirement: 3x 512 GB disks (or enough space to create three logical drives with that amount of space) |
Ensure the disks are wiped (see the example below) |
MAC address of first NIC |
A second NIC, or a set of bonded NICs, is required |
IPMI IP address |
IPMI Username/Password |
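The tables above call for wiped disks. One common approach is wipefs, shown here against a placeholder device; double-check the device name first, because this command erases the partition-table and file system signatures on that disk:

sudo wipefs -a /dev/sdX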

14.6 Compute Hosts

One or more KVM Compute servers will be used as the compute host targets for instances.

Item
Disk Requirement: 2x 512 GB disks (or enough space to create three logical drives with that amount of space)
A NIC for PXE boot and a second NIC, or a NIC for PXE and a set of bonded NICs, are required
Ensure the disks are wiped

Table to record your Compute host details:

ID | NIC MAC Address | IPMI Username/Password | IPMI IP Address | CPU/Mem/Disk
 | | | |
 | | | |
 | | | |

14.7 Storage Hosts

Three or more servers with local disk volumes are needed to provide cinder block storage resources.

Note

The cluster created from the block storage nodes must be able to form a quorum. In other words, the node count of the cluster must be an odd number (3, 5, 7, and so on) so that a strict majority of nodes can always be established.

Item
Disk Requirement: 3x 512 GB disks (or enough space to create three logical drives with that amount of space)

The block storage appliance deployed on a host is expected to consume approximately 40 GB of disk space from the host root disk for ephemeral storage to run the block storage virtual machine.

A NIC for PXE boot and a second NIC, or a NIC for PXE and a set of bonded NICs, are required
Ensure the disks are wiped

Table to record your block storage host details:

ID | NIC MAC Address | IPMI Username/Password | IPMI IP Address | CPU/Mem/Disk | Data Volume
 | | | | |
 | | | | |
 | | | | |

14.8 Additional Comments

This section is for any additional information that you deem necessary.