3 Recommended Hardware Minimums for the Example Configurations
3.1 Recommended Hardware Minimums for an Entry-scale KVM
These recommended minimums are based on example configurations included with the installation models (see Chapter 9, Example Configurations). They are suitable only for demo environments. For production systems you will want to consider your capacity and performance requirements when making decisions about your hardware.
The disk requirements detailed below can be met with logical drives, logical volumes, or external storage such as a 3PAR array.
Server Hardware - Minimum Requirements and Recommendations:

Node Type | Role Name | Required Number | Disk | Memory | Network | CPU
---|---|---|---|---|---|---
Dedicated Cloud Lifecycle Manager (optional) | Lifecycle-manager | 1 | 300 GB | 8 GB | 1 x 10 Gbit/s with PXE support | 8 CPU (64-bit) cores total (Intel x86_64)
Control Plane | Controller | 3 | | 128 GB | 2 x 10 Gbit/s with one PXE-enabled port | 8 CPU (64-bit) cores total (Intel x86_64)
Compute | Compute | 1-3 | 2 x 600 GB (minimum) | 32 GB (memory must be sized based on the virtual machine instances hosted on the Compute node) | 2 x 10 Gbit/s with one PXE-enabled port | 8 CPU (64-bit) cores total (Intel x86_64) with hardware virtualization support; CPU cores must be sized based on the VM instances hosted by the Compute node
For more details about the supported network requirements, see Chapter 9, Example Configurations.
3.2 Recommended Hardware Minimums for an Entry-scale ESX KVM Model
These recommended minimums are based on example configurations included with the installation models (see Chapter 9, Example Configurations). They are suitable only for demo environments. For production systems you will want to consider your capacity and performance requirements when making decisions about your hardware.
SUSE OpenStack Cloud currently supports the following ESXi versions:
ESXi version 6.0
ESXi version 6.0 (Update 1b)
ESXi version 6.5
The following are the requirements for your vCenter server:
Software: vCenter (it is recommended to run the same vCenter server version as the ESXi hosts)
License Requirements: vSphere Enterprise Plus license
Server Hardware - Minimum Requirements and Recommendations:

Node Type | Role Name | Required Number | Disk | Memory | Network | CPU
---|---|---|---|---|---|---
Dedicated Cloud Lifecycle Manager (optional) | Lifecycle-manager | 1 | 300 GB | 8 GB | 1 x 10 Gbit/s with PXE support | 8 CPU (64-bit) cores total (Intel x86_64)
Control Plane | Controller | 3 | | 128 GB | 2 x 10 Gbit/s with one PXE-enabled port | 8 CPU (64-bit) cores total (Intel x86_64)
Compute (ESXi hypervisor) | | 2 | 2 x 1 TB (minimum, shared across all nodes) | 128 GB (minimum) | 2 x 10 Gbit/s + 1 NIC (for Data Center access) | 16 CPU (64-bit) cores total (Intel x86_64)
Compute (KVM hypervisor) | kvm-compute | 1-3 | 2 x 600 GB (minimum) | 32 GB (memory must be sized based on the virtual machine instances hosted on the Compute node) | 2 x 10 Gbit/s with one PXE-enabled port | 8 CPU (64-bit) cores total (Intel x86_64) with hardware virtualization support; CPU cores must be sized based on the VM instances hosted by the Compute node
OVSvApp VM | on VMware cluster | 1 | 80 GB | 4 GB | 3 VMXNET virtual network adapters | 2 vCPU
nova proxy VM | on VMware cluster | 1 per cluster | 80 GB | 4 GB | 3 VMXNET virtual network adapters | 2 vCPU
3.3 Recommended Hardware Minimums for an Entry-scale ESX, KVM with Dedicated Cluster for Metering, Monitoring, and Logging
These recommended minimums are based on example configurations included with the installation models (see Chapter 9, Example Configurations). They are suitable only for demo environments. For production systems you will want to consider your capacity and performance requirements when making decisions about your hardware.
SUSE OpenStack Cloud currently supports the following ESXi versions:
ESXi version 6.0
ESXi version 6.0 (Update 1b)
ESXi version 6.5
The following are the requirements for your vCenter server:
Software: vCenter (it is recommended to run the same vCenter server version as the ESXi hosts)
License Requirements: vSphere Enterprise Plus license
Server Hardware - Minimum Requirements and Recommendations:

Node Type | Role Name | Required Number | Disk | Memory | Network | CPU
---|---|---|---|---|---|---
Dedicated Cloud Lifecycle Manager (optional) | Lifecycle-manager | 1 | 300 GB | 8 GB | 1 x 10 Gbit/s with PXE support | 8 CPU (64-bit) cores total (Intel x86_64)
Control Plane | Core-API Controller | 2 | | 128 GB | 2 x 10 Gbit/s with PXE support | 24 CPU (64-bit) cores total (Intel x86_64)
Control Plane | DBMQ Cluster | 3 | | 96 GB | 2 x 10 Gbit/s with PXE support | 24 CPU (64-bit) cores total (Intel x86_64)
Control Plane | Metering Mon/Log Cluster | 3 | | 128 GB | 2 x 10 Gbit/s with one PXE-enabled port | 24 CPU (64-bit) cores total (Intel x86_64)
Compute (ESXi hypervisor) | | 2 (minimum) | 2 x 1 TB (minimum, shared across all nodes) | 64 GB (memory must be sized based on the virtual machine instances hosted on the Compute node) | 2 x 10 Gbit/s + 1 NIC (for Data Center access) | 16 CPU (64-bit) cores total (Intel x86_64)
Compute (KVM hypervisor) | kvm-compute | 1-3 | 2 x 600 GB (minimum) | 32 GB (memory must be sized based on the virtual machine instances hosted on the Compute node) | 2 x 10 Gbit/s with one PXE-enabled port | 8 CPU (64-bit) cores total (Intel x86_64) with hardware virtualization support; CPU cores must be sized based on the VM instances hosted by the Compute node
OVSvApp VM | on VMware cluster | 1 | 80 GB | 4 GB | 3 VMXNET virtual network adapters | 2 vCPU
nova proxy VM | on VMware cluster | 1 per cluster | 80 GB | 4 GB | 3 VMXNET virtual network adapters | 2 vCPU
3.4 Recommended Hardware Minimums for an Ironic Flat Network Model
When using the `agent_ilo` driver, you should ensure that the most recent iLO controller firmware is installed. A recommended minimum for the iLO4 controller is version 2.30.
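As a quick illustration, the following minimal sketch checks a reported firmware version string against that 2.30 minimum. How you obtain the version string from the controller is left out here, and the helper name is purely illustrative:

```python
def meets_ilo4_minimum(version: str, minimum: str = "2.30") -> bool:
    """Compare dotted firmware version strings component-wise as
    integers rather than as plain text, e.g. "2.55" >= "2.30"."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(version) >= parse(minimum)

# Illustrative checks against the recommended iLO4 minimum of 2.30.
print(meets_ilo4_minimum("2.55"))  # True
print(meets_ilo4_minimum("2.10"))  # False
```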
These recommended minimums are based on the example configurations included with the base installation (see Chapter 9, Example Configurations) and are suitable only for demo environments. For production systems you will want to consider your capacity and performance requirements when making decisions about your hardware.
Server Hardware - Minimum Requirements and Recommendations:

Node Type | Role Name | Required Number | Disk | Memory | Network | CPU
---|---|---|---|---|---|---
Dedicated Cloud Lifecycle Manager (optional) | Lifecycle-manager | 1 | 300 GB | 8 GB | 1 x 10 Gbit/s with PXE support | 8 CPU (64-bit) cores total (Intel x86_64)
Control Plane | Controller | 3 | | 128 GB | 2 x 10 Gbit/s with one PXE-enabled port | 8 CPU (64-bit) cores total (Intel x86_64)
Compute | Compute | 1 | 1 x 600 GB (minimum) | 16 GB | 2 x 10 Gbit/s with one PXE-enabled port | 16 CPU (64-bit) cores total (Intel x86_64)
For more details about the supported network requirements, see Chapter 9, Example Configurations.
3.5 Recommended Hardware Minimums for an Entry-scale Swift Model
These recommended minimums are based on the example configurations included with the base installation (see Chapter 9, Example Configurations) and are suitable only for demo environments. For production systems you will want to consider your capacity and performance requirements when making decisions about your hardware.
The `entry-scale-swift` example runs the swift proxy, account, and container services on the three controller servers. However, it is possible to extend the model to run the swift proxy, account, and container services on dedicated servers (typically referred to as the swift proxy servers). If you are using this model, the recommended specs for the swift proxy servers are included in the table below.
Server Hardware - Minimum Requirements and Recommendations:

Node Type | Role Name | Required Number | Disk | Memory | Network | CPU
---|---|---|---|---|---|---
Dedicated Cloud Lifecycle Manager (optional) | Lifecycle-manager | 1 | 300 GB | 8 GB | 1 x 10 Gbit/s with PXE support | 8 CPU (64-bit) cores total (Intel x86_64)
Control Plane | Controller | 3 | | 128 GB | 2 x 10 Gbit/s with one PXE-enabled port | 8 CPU (64-bit) cores total (Intel x86_64)
swift Object | swobj | 3 | Depends on replication scheme: x3 replication only, or Erasure Codes only or a mix of x3 replication and Erasure Codes (see considerations at bottom of page for more details) | 32 GB (see considerations at bottom of page for more details) | 2 x 10 Gbit/s with one PXE-enabled port | 8 CPU (64-bit) cores total (Intel x86_64)
swift Proxy, Account, and Container | swpac | 3 | 2 x 600 GB (minimum, see considerations at bottom of page for more details) | 64 GB (see considerations at bottom of page for more details) | 2 x 10 Gbit/s with one PXE-enabled port | 8 CPU (64-bit) cores total (Intel x86_64)
The disk speeds (RPM) chosen should be consistent within the same ring or storage policy. It is best not to mix disk speeds within the same swift ring.
Considerations for RAM and disk capacity needs on your swift object and proxy, account, and container (PAC) servers
swift can run on a diverse range of hardware configurations. For example, a swift object server may have just a few disks (a minimum of 6 for Erasure Codes) or 70 and beyond. The memory requirement increases as more disks are added. The general rule of thumb for memory is 0.5 GB per TB of storage. For example, a system with 24 hard drives at 8 TB each, giving a total capacity of 192 TB, should use 96 GB of RAM. However, this rule does not work well for systems with a small number of small drives or a very large number of very large drives. So, after calculating the memory with this guideline, if the result is less than 32 GB, use a minimum of 32 GB; if it is over 256 GB, cap it at 256 GB, as there is no need for more memory than that.
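A minimal sketch of this rule of thumb in code, with the 0.5 GB-per-TB ratio and the 32 GB floor and 256 GB ceiling taken from the guideline above (the function name is illustrative):

```python
def object_server_ram_gb(num_drives: int, drive_size_tb: float) -> float:
    """Rule of thumb: 0.5 GB of RAM per TB of raw storage, clamped
    to a 32 GB minimum and a 256 GB maximum."""
    raw_tb = num_drives * drive_size_tb
    return min(max(0.5 * raw_tb, 32.0), 256.0)

# Worked example from the text: 24 drives x 8 TB = 192 TB -> 96 GB of RAM.
print(object_server_ram_gb(24, 8))  # 96.0
# A small system (6 x 2 TB = 12 TB) is clamped up to the 32 GB floor.
print(object_server_ram_gb(6, 2))   # 32.0
```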
When considering the capacity needs of the swift proxy, account, and container (PAC) servers, calculate 2% of the total raw storage of your object servers to determine the storage required for the PAC servers. For example, using the object server setup described above (24 hard drives at 8 TB each, or 192 TB per server) with a total of 6 object servers, the raw total is 1152 TB. 2% of that is roughly 23 TB, and you should ensure that much storage capacity is available across your swift PAC server cluster. With a cluster of three swift PAC servers, that is about 8 TB each.
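The same 2% rule as a short sketch, reproducing the worked example above (function and variable names are illustrative):

```python
def pac_capacity_tb(num_object_servers: int, drives_per_server: int,
                    drive_size_tb: float, num_pac_servers: int = 3):
    """Size the swift PAC cluster at 2% of the total raw object storage.
    Returns (total PAC capacity, capacity per PAC server) in TB."""
    raw_tb = num_object_servers * drives_per_server * drive_size_tb
    pac_total_tb = 0.02 * raw_tb
    return pac_total_tb, pac_total_tb / num_pac_servers

# Worked example from the text: 6 object servers x 24 drives x 8 TB = 1152 TB raw.
total, per_server = pac_capacity_tb(6, 24, 8)
print(f"{total:.1f} TB total, {per_server:.1f} TB per PAC server")
# -> 23.0 TB total, 7.7 TB per PAC server
```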
Another general rule of thumb is that if you expect to have more than a million objects in a container, you should consider using SSDs rather than HDDs on the swift PAC servers.