Applies to SUSE OpenStack Cloud 8

2 Pre-Installation Checklist

Important

The formatting of this page facilitates printing it out and using it to record details of your setup.

This checklist is focused on the Entry-scale KVM model but you can alter it to fit the example configuration you choose for your cloud.

2.1 BIOS and IPMI Settings

Ensure that the following BIOS and IPMI settings are applied to each bare-metal server:

Item

Choose either UEFI or Legacy BIOS in the BIOS settings

Verify the Date and Time settings in the BIOS.

Note

SUSE OpenStack Cloud installs and runs with UTC, not local time.

Ensure that Wake-on-LAN is disabled in the BIOS
Ensure that the NIC port to be used for PXE installation has PXE enabled in the BIOS
Ensure that all other NIC ports have PXE disabled in the BIOS
Ensure that all hardware in the server not directly used by SUSE OpenStack Cloud is disabled

2.2 Network Setup and Configuration

Before installing SUSE OpenStack Cloud, the following networks must be provisioned and tested. The networks are not installed or managed by the Cloud. You must install and manage the networks as documented in Book “Planning an Installation with Cloud Lifecycle Manager”, Chapter 9 “Example Configurations”.

Note that if you want a pluggable IPAM driver, it must be specified at install time. Only with a clean install of SUSE OpenStack Cloud 8 can you specify a different IPAM driver. If upgrading, you must use the default driver. More information can be found in Book “Operations Guide”, Chapter 9 “Managing Networking”, Section 9.3 “Networking Service Overview”, Section 9.3.7 “Using IPAM Drivers in the Networking Service”.

Use these checklists to confirm and record your network configuration information.

Router

The IP router used with SUSE OpenStack Cloud must support updating its ARP table through gratuitous ARP packets.

PXE Installation Network

When provisioning the IP range, allocate sufficient IP addresses to cover both the current number of servers and any planned expansion. Use the following table to help calculate the requirements:

Instance    Description    IPs
Deployer O/S 1
Controller server O/S (x3) 3
Compute servers (2nd through 100th)    single IP per server
Block storage host servers    single IP per server
Item    Value
 Network is untagged 
 No DHCP servers other than SUSE OpenStack Cloud are on the network 
 Switch PVID used to map any "internal" VLANs to untagged 
 Routable to the IPMI network 
 IP CIDR 
 IP Range (Usable IPs)

begin:

end:

 Default IP Gateway 
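As a sanity check on the range you record above, the counts from the sizing table can be tallied with a short Python sketch (the server counts and the 10.0.1.0/24 CIDR are illustrative assumptions, not values from this guide):

```python
import ipaddress

# Illustrative counts from the sizing table above (assumption: a full
# complement of 100 compute servers and 3 block storage hosts).
deployer = 1           # Deployer O/S
controllers = 3        # Controller server O/S (x3)
computes = 99          # Compute servers, one IP each (2nd through 100th)
block_storage = 3      # Block storage host servers, one IP each
needed = deployer + controllers + computes + block_storage

# A hypothetical /24 chosen for the PXE installation network.
net = ipaddress.ip_network("10.0.1.0/24")
usable = net.num_addresses - 2   # exclude network and broadcast addresses
print(f"need {needed} IPs, {usable} usable in {net}")
```

Adjust the counts to your environment and confirm that the usable range you record between "begin" and "end" is at least as large as the total.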

Management Network

The management network is the backbone used for the majority of SUSE OpenStack Cloud management communications. Control messages are exchanged between the Controllers, Compute hosts, and Cinder backends through this network. In addition to the control flows, the management network is also used to transport Swift and iSCSI based Cinder block storage traffic between servers.

When provisioning the IP Range, allocate sufficient IP addresses to cover both the current number of servers and any planned expansion. Use the following table to help calculate the requirements:

Instance    Description    IPs
Controller server O/S (x3) 3
Controller VIP 1
Compute servers (2nd through 100th)    single IP per server
VM servers    single IP per server
VIP per cluster  
Item    Value
 Network is untagged 
 No DHCP servers other than SUSE OpenStack Cloud are on the network 
 Switch PVID used to map any "internal" VLANs to untagged 
 IP CIDR 
 IP Range (Usable IPs)

begin:

end:

 Default IP Gateway 
 VLAN ID 

IPMI Network

The IPMI network connects the IPMI interfaces of the servers assigned to the cloud. Cobbler uses this network to control the state of the servers during bare-metal deployments.

Item    Value
 Network is untagged 
 Routable to the Management Network 
 IP Subnet 
 Default IP Gateway 

External API Network

The External network is used to connect OpenStack endpoints to an external public network such as a company’s intranet or the public internet in the case of a public cloud provider.

When provisioning the IP Range, allocate sufficient IP addresses to cover both the current number of servers and any planned expansion. Use the following table to help calculate the requirements.

Instance    Description    IPs
Controller server O/S (x3) 3
Controller VIP 1
Item    Value
 VLAN Tag assigned: 
 IP CIDR 
 IP Range (Usable IPs)

begin:

end:

 Default IP Gateway 
 VLAN ID 

External VM Network

The External VM network is used to connect cloud instances to an external public network such as a company’s intranet or, in the case of a public cloud provider, the public internet. The external network has a predefined range of Floating IPs, which are assigned to individual instances to enable communication to and from the instance and the assigned corporate intranet/internet. There should be a route between the External VM and External API networks so that instances provisioned in the cloud may access the Cloud API endpoints using their floating IPs.

Item    Value
 VLAN Tag assigned: 
 IP CIDR 
 IP Range (Usable IPs)

begin:

end:

 Default IP Gateway 
 VLAN ID 

2.3 Cloud Lifecycle Manager

This server contains the SUSE OpenStack Cloud installer, which is based on Git, Ansible, and Cobbler.

Item    Value
Disk Requirement: Single 8 GB disk needed per Book “Planning an Installation with Cloud Lifecycle Manager”, Chapter 2 “Hardware and Software Support Matrix”
Complete Section 3.5.2, “Installing the SUSE OpenStack Cloud Extension”
Ensure your local DNS nameserver is listed in your /etc/resolv.conf file
Install and configure NTP for your environment
Ensure your NTP server(s) are listed in your /etc/ntp.conf file
NTP time source:
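For the DNS and NTP items above, minimal configuration sketches (the nameserver address and ntp.example.com are placeholders; substitute your site's values):

```
# /etc/resolv.conf — placeholder local DNS nameserver
nameserver 192.168.1.10

# /etc/ntp.conf — placeholder NTP time source
server ntp.example.com iburst
driftfile /var/lib/ntp/drift
```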

2.4 Information for the nic_mappings.yml Input File

Log on to each type of physical server you have and issue platform-appropriate commands to identify the bus-address and port-num values that may be required. For example, run the following command:

sudo lspci -D | grep -i net

and enter this information in the space below. Use this information for the bus-address value in your nic_mappings.yml file.
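The first field of each line, in domain:bus:device.function form, is the bus-address value to record. As a sanity check, a short Python sketch that pulls it out (the sample output below is made up for illustration):

```python
# Hypothetical output of `sudo lspci -D | grep -i net` (illustration only;
# your adapters and addresses will differ).
sample = """\
0000:03:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
0000:03:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)"""

# The leading domain:bus:device.function field is the bus-address value
# to enter in nic_mappings.yml.
bus_addresses = [line.split()[0] for line in sample.splitlines()]
print(bus_addresses)
```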

NIC Adapter PCI Bus Address Output











To find the port-num use:

cat /sys/class/net/<device name>/dev_port

where <device name> is the name of the device currently mapped to this address (not necessarily the name of the device to be mapped). Enter the information for your system in the space below.

Network Device Port Number Output











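With the bus-address and port-num values recorded, an entry in nic_mappings.yml might look like the following sketch (the adapter name, logical device names, and addresses are placeholders; verify the exact schema against your release's documentation):

```yaml
nic-mappings:
  - name: MY-SERVER-2PORT          # placeholder name for this server model
    physical-ports:
      - logical-name: hed1         # placeholder logical NIC name
        type: simple-port
        bus-address: "0000:03:00.0"
      - logical-name: hed2
        type: simple-port
        bus-address: "0000:03:00.1"
```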
2.5 Control Plane

The Control Plane consists of at least three servers in a highly available cluster that host the core SUSE OpenStack Cloud services including Nova, Keystone, Glance, Cinder, Heat, Neutron, Swift, Ceilometer, and Horizon. Additional services include mariadb, ip-cluster, apache2, rabbitmq, memcached, zookeeper, kafka, storm, monasca, logging, and cmc.

Note

To mitigate the split-brain situation described in Book “Operations Guide”, Chapter 15 “Troubleshooting Issues”, Section 15.4 “Network Service Troubleshooting”, it is recommended that you use an HA network configuration with Multi-Chassis Link Aggregation (MLAG) and NIC bonding configured for all the controllers, to deliver system-level redundancy as well as network-level resiliency. Reducing the ARP timeout on the ToR switches also helps.

Table 2.1: Control Plane 1
Item    Value
 Disk Requirement: 3x 512 GB disks (or enough space to create three logical drives with that amount of space)  
 Ensure the disks are wiped 
 MAC address of first NIC 
 A second NIC, or a set of bonded NICs are required 
 IPMI IP address 
 IPMI Username/Password 
Table 2.2: Control Plane 2
Item    Value
  Disk Requirement: 3x 512 GB disks (or enough space to create three logical drives with that amount of space)  
 Ensure the disks are wiped 
 MAC address of first NIC 
 A second NIC, or a set of bonded NICs are required 
 IPMI IP address 
 IPMI Username/Password 
Table 2.3: Control Plane 3
Item    Value
  Disk Requirement: 3x 512 GB disks (or enough space to create three logical drives with that amount of space)  
 Ensure the disks are wiped 
 MAC address of first NIC 
 A second NIC, or a set of bonded NICs are required 
 IPMI IP address 
 IPMI Username/Password 

2.6 Compute Hosts

One or more KVM Compute servers will be used as the compute host targets for instances.

Item
  Disk Requirement: 2x 512 GB disks (or enough space to create two logical drives with that amount of space)
  A NIC for PXE boot and a second NIC, or a NIC for PXE and a set of bonded NICs are required
 Ensure the disks are wiped

Table to record your Compute host details:

ID    NIC MAC Address    IPMI Username/Password    IPMI IP Address    CPU/Mem/Disk
     
     
     

2.7 Storage Hosts

Three or more servers with local disk volumes to provide Cinder block storage resources.

Note

The cluster created from block storage nodes must allow for quorum. In other words, the node count of the cluster must be 3, 5, 7, or another odd number.

Item
 

Disk Requirement: 3x 512 GB disks (or enough space to create three logical drives with that amount of space)

The block storage appliance deployed on a host is expected to consume ~40 GB of disk space from the host root disk for ephemeral storage to run the block storage virtual machine.

  A NIC for PXE boot and a second NIC, or a NIC for PXE and a set of bonded NICs are required
 Ensure the disks are wiped

Table to record your block storage host details:

ID    NIC MAC Address    IPMI Username/Password    IPMI IP Address    CPU/Mem/Disk    Data Volume
      
      
      

2.8 Additional Comments

This section is for any additional information that you deem necessary.













