Applies to SUSE OpenStack Cloud 8

12 Bare Metal

The Bare Metal service provides physical hardware management features.

12.1 Introduction

The Bare Metal service provides physical hardware as opposed to virtual machines. It also provides several reference drivers, which leverage common technologies like PXE and IPMI, to cover a wide range of hardware. The pluggable driver architecture also allows vendor-specific drivers to be added for improved performance or functionality not provided by reference drivers. The Bare Metal service makes physical servers as easy to provision as virtual machines in a cloud, which in turn will open up new avenues for enterprises and service providers.

12.2 System architecture

The Bare Metal service is composed of the following components:

  1. An admin-only RESTful API service, by which privileged users, such as operators and other services within the cloud control plane, may interact with the managed bare-metal servers.

  2. A conductor service, which conducts all activity related to bare-metal deployments. Functionality is exposed via the API service. The Bare Metal service conductor and API service communicate via RPC.

  3. Various drivers that support heterogeneous hardware, which enable features specific to unique hardware platforms and leverage divergent capabilities via a common API.

  4. A message queue, which is a central hub for passing messages, such as RabbitMQ. It should use the same implementation as that of the Compute service.

  5. A database for storing information about the resources. Among other things, this includes the state of the conductors, nodes (physical servers), and drivers.

When a user requests to boot an instance, the request is passed to the Compute service via the Compute service API and scheduler. The Compute service hands over this request to the Bare Metal service, where the request passes from the Bare Metal service API to the conductor, which invokes a driver to provision a physical server for the user.

12.3 Bare Metal deployment

  1. PXE deploy process

  2. Agent deploy process

12.4 Use Bare Metal

  1. Install the Bare Metal service.

  2. Set up the Bare Metal driver in the compute node's nova.conf file.

  3. Set up the TFTP folder and prepare the PXE boot loader file.

  4. Prepare the bare metal flavor.

  5. Register the nodes with the correct drivers.

  6. Configure the driver information.

  7. Register the port information.

  8. Use the openstack server create command to kick off the bare metal provision.

  9. Check nodes' provision state and power state.
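As an illustration only, steps 5 through 9 might look like the following, assuming the agent_ipmitool driver used elsewhere in this chapter; the node name, flavor, image, IPMI address, and credentials are placeholders for your own environment:

```shell
# Register a node with the agent_ipmitool driver (step 5)
$ ironic node-create -d agent_ipmitool -n example-node

# Configure the driver information (step 6); the IPMI values
# below are placeholders for your own hardware
$ ironic node-update example-node add \
    driver_info/ipmi_address=192.0.2.10 \
    driver_info/ipmi_username=admin \
    driver_info/ipmi_password=secret

# Register the port information (step 7), using the MAC address
# of the node's provisioning NIC
$ ironic port-create -n $NODE_UUID -a 52:54:00:aa:bb:cc

# Kick off the bare metal provision (step 8)
$ openstack server create --flavor baremetal-flavor \
    --image example-image --nic net-id=$NET_UUID example-instance

# Check the nodes' provision state and power state (step 9)
$ ironic node-list
```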

12.4.1 Use multitenancy with Bare Metal service

12.4.1.1 Use multitenancy with Bare Metal service

Multitenancy allows you to create a dedicated project network, extending the Bare Metal (ironic) service's existing capability of providing flat networks. Multitenancy works in conjunction with the Networking (neutron) service to provision a bare metal server onto the project network. As a result, multiple projects can get isolated instances after deployment.

The Bare Metal service provides the local_link_connection information to the Networking service ML2 driver. The ML2 driver uses that information to plug the specified port into the project network.

Table 12.1: local_link_connection fields

Field         Description
switch_id     Required. Identifies a switch; can be an LLDP-based MAC address or an OpenFlow-based datapath_id.
port_id       Required. Port ID on the switch, for example, Gig0/1.
switch_info   Optional. Used to distinguish different switch models or another vendor-specific identifier.

12.4.1.1.1 Configure Networking service ML2 driver

To enable the Networking service ML2 driver, edit the /etc/neutron/plugins/ml2/ml2_conf.ini file:

  1. Add the name of your ML2 driver.

  2. Add the vendor ML2 plugin configuration options.

[ml2]
...
mechanism_drivers = my_mechanism_driver

[my_vendor]
param_1 = ...
param_2 = ...
param_3 = ...

For more details, see Networking service mechanism drivers.
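For example, a vendor plugin such as networking-generic-switch (a common choice for bare metal multitenancy) might be configured roughly as follows; the switch host name, device type, IP address, and credentials are placeholders, and the exact option names depend on the plugin version you install:

```ini
[ml2]
mechanism_drivers = genericswitch

# One section per managed switch (values are illustrative)
[genericswitch:my-switch-hostname]
device_type = netmiko_cisco_ios
ip = 192.0.2.50
username = admin
password = secret
```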

12.4.1.1.2 Configure Bare Metal service

After you configure the Networking service ML2 driver, configure the Bare Metal service:

  1. Edit the /etc/ironic/ironic.conf file for the ironic-conductor service. Set the network_interface node field to a valid network driver that is used for switching, cleaning, and provisioning networks.

    [DEFAULT]
    ...
    enabled_network_interfaces=flat,neutron
    
    [neutron]
    ...
    cleaning_network_uuid=$UUID
    provisioning_network_uuid=$UUID
    Warning

    The cleaning_network_uuid and provisioning_network_uuid parameters are required for the neutron network interface. If they are not set, ironic-conductor fails to start.

  2. Set the node's network interface to neutron so that it uses the Networking service ML2 driver:

    $ ironic node-create -n $NAME --network-interface neutron --driver agent_ipmitool
  3. Create a port with the appropriate local_link_connection information. Set the pxe_enabled port attribute to True to create network ports for the pxe_enabled ports only:

    $ ironic --ironic-api-version latest port-create -a $HW_MAC_ADDRESS \
      -n $NODE_UUID -l switch_id=$SWITCH_MAC_ADDRESS \
      -l switch_info=$SWITCH_HOSTNAME -l port_id=$SWITCH_PORT --pxe-enabled true
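The cleaning_network_uuid and provisioning_network_uuid values referenced above must belong to existing Networking service networks. As a sketch, assuming dedicated networks do not exist yet, they could be created and their UUIDs retrieved like this (the network names are illustrative):

```shell
# Create dedicated networks for cleaning and provisioning
# (names are examples; existing neutron networks can also be used)
$ openstack network create cleaning-net
$ openstack network create provisioning-net

# Retrieve the UUIDs to place in /etc/ironic/ironic.conf
$ openstack network show cleaning-net -f value -c id
$ openstack network show provisioning-net -f value -c id
```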

12.5 Troubleshooting

12.5.1 No valid host found error

Problem

Sometimes /var/log/nova/nova-conductor.log contains the following error:

NoValidHost: No valid host was found. There are not enough hosts available.

The message No valid host was found means that the Compute service scheduler could not find a bare metal node suitable for booting the new instance.

This usually indicates a mismatch between the resources that the Compute service expects to find and the resources that the Bare Metal service advertised to the Compute service.

Solution

If you get this message, check the following:

  1. Verify that introspection has succeeded, or that you have entered the required bare-metal node properties manually. For each node listed by the ironic node-list command, use:

    $ ironic node-show <IRONIC-NODE-UUID>

    and make sure that the properties JSON field has valid values for the keys cpus, cpu_arch, memory_mb, and local_gb.

  2. Verify that the Compute service flavor you are using does not exceed the bare-metal node properties above for the required number of nodes. To inspect the flavor, use:

    $ openstack flavor show FLAVOR
  3. Make sure that enough nodes are in the available state according to the ironic node-list command. Nodes in the manageable state have usually failed introspection.

  4. Make sure the nodes you are going to deploy to are not in maintenance mode. Use the ironic node-list command to check. A node that automatically enters maintenance mode usually has incorrect credentials configured. Check the credentials, and then remove maintenance mode:

    $ ironic node-set-maintenance <IRONIC-NODE-UUID> off
  5. It takes some time for node information to propagate from the Bare Metal service to the Compute service after introspection. Our tooling usually accounts for this, but if you performed some steps manually, there may be a period when nodes are not yet available to the Compute service. Check that the openstack hypervisor stats show command correctly shows the total amount of resources in your system.
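The checks above can be run as a quick inspection sequence, for example (the flavor name is a placeholder for your own bare metal flavor):

```shell
# List nodes and their provision, power, and maintenance states
$ ironic node-list

# Inspect one node's properties (cpus, cpu_arch, memory_mb, local_gb)
$ ironic node-show <IRONIC-NODE-UUID>

# Compare against the flavor's requirements
$ openstack flavor show baremetal-flavor

# Confirm the Compute service sees the bare metal resources
$ openstack hypervisor stats show
```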
