Applies to SUSE OpenStack Cloud 9

28 Integrating NSX for vSphere

This section describes the installation and integration of NSX-v, a Software Defined Networking (SDN) network virtualization and security platform for VMware's vSphere.

VMware's NSX embeds networking and security functionality, normally handled by hardware, directly into the hypervisor. NSX can reproduce, in software, an entire networking environment, and provides a complete set of logical networking elements and services including logical switching, routing, firewalling, load balancing, VPN, QoS, and monitoring. Virtual networks are programmatically provisioned and managed independent of the underlying hardware.

VMware's neutron plugin called NSX for vSphere (NSX-v) has been tested under the following scenarios:

  • Virtual SUSE OpenStack Cloud deployment

  • Baremetal SUSE OpenStack Cloud deployment

Installation instructions are provided for both scenarios. This documentation is meant as an example of how to integrate VMware's NSX-v neutron plugin with SUSE OpenStack Cloud. The examples in this documentation are not suitable for all environments. To configure this for your specific environment, use the design guide Reference Design: VMware® NSX for vSphere (NSX) Network Virtualization Design Guide.

This section includes instructions for:

  • Integrating with NSX for vSphere on Baremetal

  • Integrating with NSX for vSphere on virtual machines with changes necessary for Baremetal integration

  • Verifying NSX-v functionality

28.1 Integrating with NSX for vSphere

This section describes the installation steps and requirements for integrating with NSX for vSphere on virtual machines and baremetal hardware.

28.1.1 Pre-Integration Checklist

The following installation and integration instructions assume an understanding of VMware's ESXi and vSphere products for setting up virtual environments.

Please review the following requirements for the VMware vSphere environment.

Software Requirements

Before you install or upgrade NSX, verify your software versions. The following are the required versions.

  • SUSE OpenStack Cloud: 8

  • VMware NSX-v Manager: 6.3.4 or higher

  • VMware NSX-v neutron plugin: Pike Release (TAG=11.0.0)

  • VMware ESXi and vSphere Appliance (vSphere Web Client): 6.0 or higher

A vCenter server (appliance) is required to manage the vSphere environment. It is recommended that you install a vCenter appliance as an ESX virtual machine.

Important
Important

Each ESXi compute cluster is required to have shared storage between the hosts in the cluster, otherwise attempts to create instances through nova-compute will fail.

28.1.2 Installing OpenStack

OpenStack can be deployed in two ways: on baremetal (physical hardware) or in an ESXi virtual environment on virtual machines. The following instructions describe how to install OpenStack.

Note
Note

Changes for installation on baremetal hardware are noted in each section.

This deployment example will consist of two ESXi clusters at minimum: a control-plane cluster and a compute cluster. The control-plane cluster must have 3 ESXi hosts minimum (due to VMware's recommendation that each NSX controller virtual machine is on a separate host). The compute cluster must have 2 ESXi hosts minimum. There can be multiple compute clusters. The following table outlines the virtual machine specifications to be built in the control-plane cluster:

Table 28.1: NSX Hardware Requirements for Virtual Machine Integration

  • Dedicated lifecycle manager (not needed for baremetal): 1 required, 100GB disk, 8GB memory, 3 VMXNET Virtual Network Adapters, 4 vCPU

  • Controller virtual machines (not needed for baremetal): 3 required, 3 x 300GB disk, 32GB memory, 3 VMXNET Virtual Network Adapters, 8 vCPU

  • Compute virtual machines: 1 per compute cluster, 80GB disk, 4GB memory, 3 VMXNET Virtual Network Adapters, 2 vCPU

  • NSX Edge Gateway/DLR/Metadata-proxy appliances: required number, disk, memory, network, and CPU are autogenerated by NSXv

Baremetal: In addition to the ESXi hosts, it is recommended to have one physical host for the Cloud Lifecycle Manager node and three physical hosts for the controller nodes.

28.1.2.1 Network Requirements

NSX-v requires the following for networking:

  • The ESXi hosts, vCenter, and the NSX Manager appliance must be able to resolve DNS lookups.

  • The ESXi hosts must have the NTP service configured and enabled.

  • Jumbo frames must be enabled on the switch ports that the ESXi hosts are connected to.

  • The ESXi hosts must have at least 2 physical network cards each.
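
These prerequisites can be spot-checked from an ESXi host shell over SSH. The following is a minimal sketch; vcenter.example.com and 192.0.2.10 are placeholders for your vCenter server name and a neighbouring ESXi host:

tux > nslookup vcenter.example.com      # DNS resolution from the host
tux > /etc/init.d/ntpd status           # NTP service is configured and running
tux > esxcli network nic list           # at least two physical NICs (vmnics) listed
tux > vmkping -d -s 1572 192.0.2.10     # 1600-byte frames pass without fragmentation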

28.1.2.2 Network Model

The model in these instructions requires the following networks:

ESXi Hosts and vCenter

The network that the ESXi hosts and vCenter use to route traffic.

NSX Management

The network which the NSX controllers and NSX Manager will use.

NSX VTEP Pool

The network that NSX uses to create endpoints for VxLAN tunnels.

Management

The network that OpenStack uses for deployment and maintenance of the cloud.

Internal API (optional)

The network group that will be used for management (private API) traffic within the cloud.

External API

This is the network that users will use to make requests to the cloud.

External VM

VLAN-backed provider network for external access to guest VMs (floating IPs).
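
For illustration only, a network from this model is typically described in the input model's networks.yml along the following lines. This is a minimal sketch with placeholder names, VLAN IDs, and addresses; see the example input model referenced later in this chapter for the authoritative definitions:

---
product:
  version: 2
networks:
  - name: MANAGEMENT-NET
    vlanid: 100
    tagged-vlan: true
    cidr: 192.168.10.0/24
    gateway-ip: 192.168.10.1
    network-group: MANAGEMENT
  - name: EXTERNAL-API-NET
    vlanid: 101
    tagged-vlan: true
    cidr: 10.0.1.0/24
    gateway-ip: 10.0.1.1
    network-group: EXTERNAL-API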

28.1.2.3 vSphere port security settings

Baremetal: Even though the OpenStack deployment is on baremetal, it is still necessary to define each VLAN within a vSphere Distributed Switch for the nova compute proxy virtual machine.

The vSphere port security settings for both VMs and baremetal are shown in the table below.

ESXi Hosts and vCenter

VLAN Type: Tagged. Interface: N/A. vSphere Port Group Security Settings: Defaults.

NSX Manager (must be able to reach the ESXi Hosts and vCenter network)

VLAN Type: Tagged. Interface: N/A. vSphere Port Group Security Settings: Defaults.

NSX VTEP Pool

VLAN Type: Tagged. Interface: N/A. vSphere Port Group Security Settings: Defaults.

Management

VLAN Type: Tagged or Untagged. Interface: eth0. vSphere Port Group Security Settings: Promiscuous Mode: Accept; MAC Address Changes: Reject; Forged Transmits: Reject.

Internal API (optional; may be combined with the Management network. If network segregation is required for security reasons, keep this as a separate network.)

VLAN Type: Tagged. Interface: eth2. vSphere Port Group Security Settings: Promiscuous Mode: Accept; MAC Address Changes: Reject; Forged Transmits: Accept.

External API (Public)

VLAN Type: Tagged. Interface: eth1. vSphere Port Group Security Settings: Promiscuous Mode: Accept; MAC Address Changes: Reject; Forged Transmits: Accept.

External VM

VLAN Type: Tagged. Interface: N/A. vSphere Port Group Security Settings: Promiscuous Mode: Accept; MAC Address Changes: Reject; Forged Transmits: Accept.

IPMI (baremetal only)

VLAN Type: Untagged. Interface: N/A. vSphere Port Group Security Settings: N/A.

28.1.2.4 Configuring the vSphere Environment

Before deploying OpenStack with NSX-v, the VMware vSphere environment must be properly configured, including setting up vSphere distributed switches and port groups. For detailed instructions, see Chapter 27, Installing ESX Computes and OVSvAPP.

Installing and configuring the VMware NSX Manager and creating the NSX network within the vSphere environment is covered below.

Before proceeding with the installation, ensure that the following are configured in the vSphere environment.

  • The vSphere datacenter is configured with at least two clusters, one control-plane cluster and one compute cluster.

  • Verify that all software, hardware, and networking requirements have been met.

  • Ensure the vSphere distributed virtual switches (DVS) are configured for each cluster.

Note
Note

The MTU setting for each DVS should be set to 1600. NSX should automatically apply this setting to each DVS during the setup process. Alternatively, the setting can be manually applied to each DVS before setup if desired.
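
As a spot check, the MTU actually configured on each switch is visible from an ESXi host shell; a minimal sketch (the command lists every standard and distributed switch on the host together with its MTU):

tux > esxcfg-vswitch -l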

Make sure there is a copy of the SUSE Linux Enterprise Server 12 SP4 .iso in the ardana home directory, /var/lib/ardana, and that it is called sles12sp4.iso.
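
A quick check that the installation image is in place under the expected name:

ardana > ls -lh /var/lib/ardana/sles12sp4.iso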

Install the open-vm-tools package.

tux > sudo zypper install open-vm-tools
28.1.2.4.1 Install NSX Manager

The NSX Manager is the centralized network management component of NSX. It provides a single point of configuration and REST API entry-points.

The NSX Manager is installed as a virtual appliance on one of the ESXi hosts within the vSphere environment. This guide will cover installing the appliance on one of the ESXi hosts within the control-plane cluster. For more detailed information, refer to VMware's NSX Installation Guide.

To install the NSX Manager, download the virtual appliance from VMware and deploy the appliance within vCenter onto one of the ESXi hosts. For information on deploying appliances within vCenter, refer to VMware's documentation for ESXi 5.5 or 6.0.

During the deployment of the NSX Manager appliance, be aware of the following:

  • When prompted, select Accept extra configuration options. This will present options for configuring IPv4 and IPv6 addresses, the default gateway, DNS, NTP, and SSH properties during the installation, rather than configuring these settings manually after the installation.

  • Choose an ESXi host that resides within the control-plane cluster.

  • Ensure that the network mapped port group is the DVS port group that represents the VLAN the NSX Manager will use for its networking (in this example it is labeled as the NSX Management network).

Note
Note

The IP address assigned to the NSX Manager must be able to resolve reverse DNS.
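
A reverse lookup can be confirmed from any machine that uses the same DNS servers; 192.0.2.50 below is a placeholder for the IP address assigned to the NSX Manager:

tux > nslookup 192.0.2.50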

Power on the NSX Manager virtual machine after it finishes deploying and wait for the operating system to fully load. When ready, carry out the following steps to have the NSX Manager use single sign-on (SSO) and to register the NSX Manager with vCenter:

  1. Open a web browser and enter the hostname or IP address that was assigned to the NSX Manager during setup.

  2. Log in with the username admin and the password set during the deployment.

  3. After logging in, click on Manage vCenter Registration.

  4. Configure the NSX Manager to connect to the vCenter server.

  5. Configure the NSX Manager for single sign-on (SSO) under the Lookup Service URL section.

Note
Note

When configuring SSO, use Lookup Service Port 443 for vCenter version 6.0. Use Lookup Service Port 7444 for vCenter version 5.5.

SSO makes vSphere and NSX more secure by allowing the various components to communicate with each other through a secure token exchange mechanism, instead of requiring each component to authenticate a user separately. For more details, refer to VMware's documentation on Configure Single Sign-On.

Both the Lookup Service URL and the vCenter Server sections should have a status of connected when configured properly.

Log into the vSphere Web Client (log out and back in if already logged in). The NSX Manager will appear under the Networking & Security section of the client.

Note
Note

The Networking & Security section will not appear under the vSphere desktop client. Use of the web client is required for the rest of this process.

28.1.2.4.2 Add NSX Controllers

The NSX controllers serve as the central control point for all logical switches within the vSphere environment's network, and they maintain information about all hosts, logical switches (VXLANs), and distributed logical routers.

NSX controllers will each be deployed as a virtual appliance on the ESXi hosts within the control-plane cluster to form the NSX Controller cluster. For details about NSX controllers and the NSX control plane in general, refer to VMware's NSX documentation.

Important
Important

Whatever the size of the NSX deployment, the following conditions must be met:

  • Each NSX Controller cluster must contain three controller nodes. Having a different number of controller nodes is not supported.

  • Before deploying NSX Controllers, you must deploy an NSX Manager appliance and register vCenter with NSX Manager.

  • Determine the IP pool settings for your controller cluster, including the gateway and IP address range. DNS settings are optional.

  • The NSX Controller IP network must have connectivity to the NSX Manager and to the management interfaces on the ESXi hosts.

Log in to the vSphere web client and do the following steps to add the NSX controllers:

  1. In vCenter, navigate to Home, select Networking & Security › Installation, and then select the Management tab.

  2. In the NSX Controller nodes section, click the Add Node icon represented by a green plus sign.

  3. Enter the NSX Controller settings appropriate to your environment. If you are following this example, use the control-plane clustered ESXi hosts and control-plane DVS port group for the controller settings.

  4. If it has not already been done, create an IP pool for the NSX Controller cluster with at least three IP addresses by clicking New IP Pool. Individual controllers can be in separate IP subnets, if necessary.

  5. Click OK to deploy the controller. After the first controller is completely deployed, deploy two additional controllers.

Important
Important

Three NSX controllers are mandatory. VMware recommends configuring a DRS anti-affinity rule to prevent the controllers from residing on the same ESXi host. See more information about DRS Affinity Rules.

28.1.2.4.3 Prepare Clusters for NSX Management

During Host Preparation, the NSX Manager:

  • Installs the NSX kernel modules on ESXi hosts that are members of vSphere clusters

  • Builds the NSX control-plane and management-plane infrastructure

The NSX kernel modules are packaged in VIB (vSphere Installation Bundle) files. They run within the hypervisor kernel and provide services such as distributed routing, distributed firewall, and VXLAN bridging capabilities. These files are installed on a per-cluster level, and the setup process deploys the required software on all ESXi hosts in the target cluster. When a new ESXi host is added to the cluster, the required software is automatically installed on the newly added host.

Before beginning the NSX host preparation process, make sure of the following in your environment:

  • Register vCenter with NSX Manager and deploy the NSX controllers.

  • Verify that DNS reverse lookup returns a fully qualified domain name when queried with the IP address of NSX Manager.

  • Verify that the ESXi hosts can resolve the DNS name of vCenter server.

  • Verify that the ESXi hosts can connect to vCenter Server on port 80.

  • Verify that the network time on vCenter Server and the ESXi hosts is synchronized.

  • For each vSphere cluster that will participate in NSX, verify that the ESXi hosts within each respective cluster are attached to a common VDS.

    For example, consider a cluster with two hosts, Host1 and Host2. Host1 is attached to VDS1 and VDS2, while Host2 is attached to VDS1 and VDS3. When you prepare the cluster for NSX, you can only associate NSX with VDS1. If you add another host (Host3) to the cluster and Host3 is not attached to VDS1, it is an invalid configuration, and Host3 will not be ready for NSX functionality.

  • If you have vSphere Update Manager (VUM) in your environment, you must disable it before preparing clusters for network virtualization. For information on how to check if VUM is enabled and how to disable it if necessary, see the VMware knowledge base.

  • In the vSphere web client, ensure that the cluster is in the resolved state (listed under the Host Preparation tab). If the Resolve option does not appear in the cluster's Actions list, then it is in a resolved state.

To prepare the vSphere clusters for NSX:

  1. In vCenter, select Home › Networking & Security › Installation, and then select the Host Preparation tab.

  2. Continuing with the example in these instructions, click on the Actions button (gear icon) and select Install for both the control-plane cluster and compute cluster (if you are using something other than this example, then only install on the clusters that require NSX logical switching, routing, and firewalls).

  3. Monitor the installation until the Installation Status column displays a green check mark.

    Important
    Important

    While installation is in progress, do not deploy, upgrade, or uninstall any service or component.

    Important
    Important

    If the Installation Status column displays a red warning icon and says Not Ready, click Resolve. Clicking Resolve might result in a reboot of the host. If the installation is still not successful, click the warning icon. All errors will be displayed. Take the required action and click Resolve again.

  4. To verify the VIBs (esx-vsip and esx-vxlan) are installed and registered, SSH into an ESXi host within the prepared cluster. List the names and versions of the VIBs installed by running the following command:

    tux > esxcli software vib list | grep esx
    ...
    esx-vsip      6.0.0-0.0.2732470    VMware  VMwareCertified   2015-05-29
    esx-vxlan     6.0.0-0.0.2732470    VMware  VMwareCertified   2015-05-29
    ...
Important
Important

After host preparation:

  • A host reboot is not required

  • If you add a host to a prepared cluster, the NSX VIBs are automatically installed on the host.

  • If you move a host to an unprepared cluster, the NSX VIBs are automatically uninstalled from the host. In this case, a host reboot is required to complete the uninstall process.

28.1.2.4.4 Configure VXLAN Transport Parameters

VXLAN is configured on a per-cluster basis, where each vSphere cluster that is to participate in NSX is mapped to a vSphere Distributed Virtual Switch (DVS). When mapping a vSphere cluster to a DVS, each ESXi host in that cluster is enabled for logical switches. The settings chosen in this section will be used in creating the VMkernel interface.

Configuring transport parameters involves selecting a DVS, a VLAN ID, an MTU size, an IP addressing mechanism, and a NIC teaming policy. The MTU for each switch must be set to 1550 or higher. By default, it is set to 1600 by NSX. This is also the recommended setting for integration with OpenStack.
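
Once the VTEP interfaces exist, the 1600-byte path between hosts can be verified from an ESXi host shell. This is a hedged example: 192.0.2.20 is a placeholder for another host's VTEP address, and ++netstack=vxlan selects the VXLAN TCP/IP stack used by the VTEP vmkernel interfaces:

tux > vmkping ++netstack=vxlan -d -s 1572 192.0.2.20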

To configure the VXLAN transport parameters:

  1. In the vSphere web client, navigate to Home › Networking & Security › Installation.

  2. Select the Host Preparation tab.

  3. Click the Configure link in the VXLAN column.

  4. Enter the required information.

  5. If you have not already done so, create an IP pool for the VXLAN tunnel end points (VTEP) by clicking New IP Pool:

  6. Click OK to create the VXLAN network.

When configuring the VXLAN transport network, consider the following:

  • Use a NIC teaming policy that best suits the environment being built. Load Balance - SRCID as the VMKNic teaming policy is usually the most flexible out of all the available options. This allows each host to have a VTEP vmkernel interface for each dvuplink on the selected distributed switch (two dvuplinks gives two VTEP interfaces per ESXi host).

  • Do not mix different teaming policies for different portgroups on a VDS where some use Etherchannel or Link Aggregation Control Protocol (LACPv1 or LACPv2) and others use a different teaming policy. If uplinks are shared in these different teaming policies, traffic will be interrupted. If logical routers are present, there will be routing problems. Such a configuration is not supported and should be avoided.

  • For larger environments it may be better to use DHCP for the VMKNic IP Addressing.

  • For more information and further guidance, see the VMware NSX for vSphere Network Virtualization Design Guide.

28.1.2.4.5 Assign Segment ID Pool

Each VXLAN tunnel will need a segment ID to isolate its network traffic. Therefore, it is necessary to configure a segment ID pool for the NSX VXLAN network to use. If an NSX controller is not deployed within the vSphere environment, a multicast address range must be added to spread traffic across the network and avoid overloading a single multicast address.

For the purposes of the example in these instructions, do the following steps to assign a segment ID pool. Otherwise, follow best practices as outlined in VMware's documentation.

  1. In the vSphere web client, navigate to Home › Networking & Security › Installation.

  2. Select the Logical Network Preparation tab.

  3. Click Segment ID, and then Edit.

  4. Enter the segment ID pool range, and then click OK to save your changes.

28.1.2.4.6 Create a Transport Zone

A transport zone controls which hosts a logical switch can reach and has the following characteristics.

  • It can span one or more vSphere clusters.

  • Transport zones dictate which clusters can participate in the use of a particular network. Therefore they dictate which VMs can participate in the use of a particular network.

  • A vSphere NSX environment can contain one or more transport zones based on the environment's requirements.

  • A host cluster can belong to multiple transport zones.

  • A logical switch can belong to only one transport zone.

Note
Note

OpenStack has only been verified to work with a single transport zone within a vSphere NSX-v environment. Other configurations are currently not supported.

For more information on transport zones, refer to VMware's Add A Transport Zone.

To create a transport zone:

  1. In the vSphere web client, navigate to Home › Networking & Security › Installation.

  2. Select the Logical Network Preparation tab.

  3. Click Transport Zones, and then click the New Transport Zone (New Logical Switch) icon.

  4. In the New Transport Zone dialog box, type a name and an optional description for the transport zone.

  5. For these example instructions, select the control plane mode as Unicast.

    Note
    Note

    Whether there is a controller in the environment or if the environment is going to use multicast addresses will determine the control plane mode to select:

    • Unicast (what this set of instructions uses): The control plane is handled by an NSX controller. All unicast traffic leverages optimized headend replication. No multicast IP addresses or special network configuration is required.

    • Multicast: Multicast IP addresses in the physical network are used for the control plane. This mode is recommended only when upgrading from older VXLAN deployments. Requires PIM/IGMP in the physical network.

    • Hybrid: Offloads local traffic replication to the physical network (L2 multicast). This requires IGMP snooping on the first-hop switch and access to an IGMP querier in each VTEP subnet, but does not require PIM. The first-hop switch handles traffic replication for the subnet.

  6. Select the clusters to be added to the transport zone.

  7. Click OK to save your changes.

28.1.2.4.7 Deploying SUSE OpenStack Cloud

With the vSphere environment setup complete, OpenStack can be deployed. The following sections cover creating virtual machines within the vSphere environment, configuring the cloud model, and integrating the NSX-v neutron core plugin into OpenStack:

  1. Create the virtual machines

  2. Deploy the Cloud Lifecycle Manager

  3. Configure the neutron environment with NSX-v

  4. Modify the cloud input model

  5. Set up the parameters

  6. Deploy the Operating System with Cobbler

  7. Deploy the cloud

28.1.2.5 Deploying SUSE OpenStack Cloud

Within the vSphere environment, create the OpenStack virtual machines. At minimum, there must be the following:

  • One Cloud Lifecycle Manager deployer

  • Three OpenStack controllers

  • One OpenStack neutron compute proxy

For the minimum NSX hardware requirements, refer to Table 28.1, “NSX Hardware Requirements for Virtual Machine Integration”.

If ESX VMs are to be used as nova compute proxy nodes, set up three LAN interfaces in each virtual machine as shown in the networking model table below. There must be at least one nova compute proxy node per cluster.

  • Management: eth0

  • External API: eth1

  • Internal API: eth2

28.1.2.5.1 Advanced Configuration Option
Important
Important

Within vSphere, for each virtual machine:

  • In the Options section, under Advanced configuration parameters, ensure that the disk.EnableUUID option is set to true.

  • If the option does not exist, it must be added. This option is required for the OpenStack deployment.

  • If the option is not specified, then the deployment will fail when attempting to configure the disks of each virtual machine.
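
For reference, once added the setting is stored in the virtual machine's .vmx configuration file as the following line:

disk.EnableUUID = "TRUE"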

28.1.2.5.2 Setting Up the Cloud Lifecycle Manager
28.1.2.5.2.1 Installing the Cloud Lifecycle Manager

Setting ARDANA_INIT_AUTO=1 is optional; it prevents the installation from stopping for authentication at any step. You can also run ardana-init to launch the Cloud Lifecycle Manager. You will be prompted to enter an optional SSH passphrase, which is used to protect the key used by Ansible when connecting to its client nodes. If you do not want to use a passphrase, press Enter at the prompt.
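
A sketch of the non-interactive invocation described above, assuming the ardana-init command is available on the deployer:

ardana > ARDANA_INIT_AUTO=1 ardana-init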

If you have protected the SSH key with a passphrase, you can avoid having to enter the passphrase on every attempt by Ansible to connect to its client nodes with the following commands:

ardana > eval $(ssh-agent)
ardana > ssh-add ~/.ssh/id_rsa

The Cloud Lifecycle Manager will contain the installation scripts and configuration files to deploy your cloud. You can set up the Cloud Lifecycle Manager on a dedicated node or on your first controller node. The default choice is to use the first controller node as the Cloud Lifecycle Manager.

  1. Download the product from:

    1. SUSE Customer Center

  2. Boot your Cloud Lifecycle Manager from the SLES ISO contained in the download.

  3. Enter install (all lower-case, exactly as spelled out here) to start installation.

  4. Select the language. Note that only the English language selection is currently supported.

  5. Select the location.

  6. Select the keyboard layout.

  7. Select the primary network interface, if prompted:

    1. Assign IP address, subnet mask, and default gateway

  8. Create new account:

    1. Enter a username.

    2. Enter a password.

    3. Enter time zone.

Once the initial installation is finished, complete the Cloud Lifecycle Manager setup with these steps:

  1. Ensure your Cloud Lifecycle Manager has a valid DNS nameserver specified in /etc/resolv.conf.

  2. Set the environment variable LC_ALL:

    export LC_ALL=C
    Note
    Note

    This can be added to ~/.bashrc or /etc/bash.bashrc.
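
    For example, to persist the setting as suggested in the note:

    ardana > echo "export LC_ALL=C" >> ~/.bashrc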

The node should now have a working SLES setup.

28.1.2.5.3 Configure the Neutron Environment with NSX-v

In summary, integrating NSX-v with SUSE OpenStack Cloud has four major steps:

  1. Modify the input model to define the server roles, servers, network roles, and networks. See Section 28.1.2.5.3.2, “Modify the Input Model”.

  2. Set up the parameters needed for neutron and nova to communicate with the ESX environment and the NSX Manager. See Section 28.1.2.5.3.2.2.1, “Set up the Parameters”.

  3. Deploy the operating system with Cobbler. See Section 28.1.2.5.3.3, “Deploying the Operating System with Cobbler”.

  4. Deploy the cloud. See Section 28.1.2.5.3.4, “Deploying the Cloud”.

28.1.2.5.3.1 Third-Party Import of VMware NSX-v Into neutron and python-neutronclient

To import the NSX-v neutron core-plugin into Cloud Lifecycle Manager, run the third-party import playbook.

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost third-party-import.yml
28.1.2.5.3.2 Modify the Input Model

After the third-party import has completed successfully, modify the input model:

  1. Prepare for input model changes

  2. Define the servers and server roles needed for an NSX-v cloud.

  3. Define the necessary networks and network groups

  4. Specify the services needed to be deployed on the Cloud Lifecycle Manager controllers and the nova ESX compute proxy nodes.

  5. Commit the changes and run the configuration processor.

28.1.2.5.3.2.1 Prepare for Input Model Changes

The previous steps created a modified SUSE OpenStack Cloud tarball with the NSX-v core plugin in the neutron and neutronclient venvs. The tar file can now be extracted and the ardana-init.bash script can be run to set up the deployment files and directories. If a modified tar file was not created, then extract the tar from the /media/cdrom/ardana location.

To run the ardana-init.bash script, which is included in the build, use this command:

ardana > ~/ardana/ardana-init.bash
28.1.2.5.3.2.2 Create the Input Model

Copy the contents of the example input model to the ~/openstack/my_cloud/definition/ directory:

ardana > cd ~/ardana-extensions/ardana-extensions-nsx/vmware/examples/models/entry-scale-nsx
ardana > cp -R * ~/openstack/my_cloud/definition/

Refer to the reference input model in ardana-extensions/ardana-extensions-nsx/vmware/examples/models/entry-scale-nsx/ for details about how these definitions should be made. The main differences between this model and the standard Cloud Lifecycle Manager input models are:

  • Only the neutron-server is deployed. No other neutron agents are deployed.

  • Additional parameters need to be set in pass_through.yml and nsx/nsx_config.yml.

  • nova ESX compute proxy nodes may be ESX virtual machines.

28.1.2.5.3.2.2.1 Set up the Parameters

The special parameters needed for the NSX-v integrations are set in the files pass_through.yml and nsx/nsx_config.yml. They are in the ~/openstack/my_cloud/definition/data directory.

Parameters in pass_through.yml are in the sample input model in the ardana-extensions/ardana-extensions-nsx/vmware/examples/models/entry-scale-nsx/ directory. The comments in the sample input model file describe how to locate the values of the required parameters.

#
# (c) Copyright 2017 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
product:
  version: 2
pass-through:
  global:
    vmware:
      - username: VCENTER_ADMIN_USERNAME
        ip: VCENTER_IP
        port: 443
        cert_check: false
        # The password needs to be encrypted using the script
        # openstack/ardana/ansible/ardanaencrypt.py on the deployer:
        #
        # $ cd ~/openstack/ardana/ansible
        # $ export ARDANA_USER_PASSWORD_ENCRYPT_KEY=ENCRYPTION_KEY
        # $ ./ardanaencrypt.py
        #
        # The script will prompt for the vCenter password. The string
        # generated is the encrypted password. Enter the string
        # enclosed by double-quotes below.
        password: "ENCRYPTED_PASSWD_FROM_ARDANAENCRYPT"

        # The id is obtained from the URL
        # https://VCENTER_IP/mob/?moid=ServiceInstance&doPath=content%2eabout,
        # field instanceUUID.
        id: VCENTER_UUID
  servers:
    -
      # Here the 'id' refers to the name of the node running the
      # esx-compute-proxy. This is identical to the 'servers.id' in
      # servers.yml. There should be one esx-compute-proxy node per ESX
      # resource pool.
      id: esx-compute1
      data:
        vmware:
          vcenter_cluster: VMWARE_CLUSTER1_NAME
          vcenter_id: VCENTER_UUID
    -
      id: esx-compute2
      data:
        vmware:
          vcenter_cluster: VMWARE_CLUSTER2_NAME
          vcenter_id: VCENTER_UUID

Additional parameters are set in nsx/nsx_config.yml. The comments describe how to retrieve the values.

# (c) Copyright 2017 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2
  configuration-data:
    - name: NSX-CONFIG-CP1
      services:
        - nsx
      data:
        # (Required) URL for the NSXv manager (e.g. https://management_ip).
        manager_uri: 'https://NSX_MGR_IP'

        # (Required) NSXv username.
        user: 'admin'

        # (Required) Encrypted NSX Manager password.
        # Password encryption is done by the script
        # ~/openstack/ardana/ansible/ardanaencrypt.py on the deployer:
        #
        # $ cd ~/openstack/ardana/ansible
        # $ export ARDANA_USER_PASSWORD_ENCRYPT_KEY=ENCRYPTION_KEY
        # $ ./ardanaencrypt.py
        #
        # NOTE: Make sure that the NSX Manager password is encrypted with the same key
        # used to encrypt the VCenter password.
        #
        # The script will prompt for the NSX Manager password. The string
        # generated is the encrypted password. Enter the string enclosed
        # by double-quotes below.
        password: "ENCRYPTED_NSX_MGR_PASSWD_FROM_ARDANAENCRYPT"
        # (Required) datacenter id for edge deployment.
        # Retrieved using
        #    http://VCENTER_IP_ADDR/mob/?moid=ServiceInstance&doPath=content
        # click on the value from the rootFolder property. The datacenter_moid is
        # the value of the childEntity property.
        # The vCenter-ip-address comes from the file pass_through.yml in the
        # input model under "pass-through.global.vmware.ip".
        datacenter_moid: 'datacenter-21'
        # (Required) id of logic switch for physical network connectivity.
        # How to retrieve
        # 1. Get to the same page where the datacenter_moid is found.
        # 2. Click on the value of the rootFolder property.
        # 3. Click on the value of the childEntity property
        # 4. Look at the network property. The external network is the
        #    network associated with the EXTERNAL VM network in vCenter.
        external_network: 'dvportgroup-74'
        # (Required) clusters ids containing OpenStack hosts.
        # Retrieved using http://VCENTER_IP_ADDR/mob, click on the value
        # from the rootFolder property. Then click on the value of the
        # hostFolder property. Cluster_moids are the values under childEntity
        # property of the compute clusters.
        cluster_moid: 'domain-c33,domain-c35'
        # (Required) resource-pool id for edge deployment.
        resource_pool_id: 'resgroup-67'
        # (Optional) datastore id for edge deployment. If not needed,
        # do not declare it.
        # datastore_id: 'datastore-117'

        # (Required) network scope id of the transport zone.
        # To get the vdn_scope_id, in the vSphere web client from the Home
        # menu:
        #   1. Click on Networking & Security.
        #   2. Click on Installation.
        #   3. Click on the Logical Network Preparation tab.
        #   4. Click on the Transport Zones button.
        #   5. Double-click on the transport zone being configured.
        #   6. Select the Manage tab.
        #   7. The vdn_scope_id will appear at the end of the URL.
        vdn_scope_id: 'vdnscope-1'

        # (Optional) Dvs id for VLAN based networks. If not needed,
        # do not declare it.
        # dvs_id: 'dvs-68'

        # (Required) backup_edge_pool: backup edge pools management range,
        # - <edge_type>:[edge_size]:MINIMUM_POOLED_EDGES:MAXIMUM_POOLED_EDGES
        # - edge_type: service (service edge) or vdr (distributed edge)
        # - edge_size: compact, large (the default), xlarge, or quadlarge
        backup_edge_pool: 'service:compact:4:10,vdr:compact:4:10'

        # (Optional) mgt_net_proxy_ips: management network IP address for
        # metadata proxy. If not needed, do not declare it.
        # mgt_net_proxy_ips: '10.142.14.251,10.142.14.252'

        # (Optional) mgt_net_proxy_netmask: management network netmask for
        # metadata proxy. If not needed, do not declare it.
        # mgt_net_proxy_netmask: '255.255.255.0'

        # (Optional) mgt_net_moid: Network ID for management network connectivity
        # Do not declare if not used.
        # mgt_net_moid: 'dvportgroup-73'

        # ca_file: Name of the certificate file. If insecure is set to True,
        # then this parameter is ignored. If insecure is set to False and this
        # parameter is not defined, then the system root CAs will be used
        # to verify the server certificate.
        ca_file: a/nsx/certificate/file

        # insecure:
        # If true (default), the NSXv server certificate is not verified.
        # If false, then the default CA truststore is used for verification.
        # This option is ignored if "ca_file" is set
        insecure: True
        # (Optional) edge_ha: if true, will duplicate any edge pool resources
        # Default to False if undeclared.
        # edge_ha: False
        # (Optional) spoofguard_enabled:
        # If True (default), indicates NSXV spoofguard component is used to
        # implement port-security feature.
        # spoofguard_enabled: True
        # (Optional) exclusive_router_appliance_size:
        # Edge appliance size to be used for creating exclusive router.
        # Valid values: 'compact', 'large', 'xlarge', 'quadlarge'
        # Defaults to 'compact' if not declared.
        # exclusive_router_appliance_size: 'compact'
28.1.2.5.3.2.3 Commit Changes and Run the Configuration Processor

Commit your changes with the input model and the required configuration values added to the pass_through.yml and nsx/nsx_config.yml files.

ardana > cd ~/openstack/my_cloud/definition
ardana > git add -A
ardana > git commit -m "Configuration changes for NSX deployment"
ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml \
  -e encrypt="" -e rekey=""

If the playbook config-processor-run.yml fails, there is an error in the input model. Fix the error and repeat the above steps.

28.1.2.5.3.3 Deploying the Operating System with Cobbler
  1. From the Cloud Lifecycle Manager, run Cobbler to install the operating system on the nodes that need to be deployed:

    ardana > cd ~/openstack/ardana/ansible
    ardana > ansible-playbook -i hosts/localhost cobbler-deploy.yml
  2. Verify the nodes that will have an operating system installed by Cobbler by running this command:

    tux > sudo cobbler system find --netboot-enabled=1
  3. Reimage the nodes using Cobbler. Do not use Cobbler to reimage the nodes running as ESX virtual machines. The command below is run on a setup where the nova ESX compute proxies are VMs. Controllers 1, 2, and 3 are running on physical servers.

    ardana > cd ~/openstack/ardana/ansible
    ardana > ansible-playbook -i hosts/localhost bm-reimage.yml -e \
       nodelist=controller1,controller2,controller3
  4. When the playbook has completed, each controller node should have an operating system installed with an IP address configured on eth0.

  5. After your controller nodes have been completed, you should install the operating system on your nova compute proxy virtual machines. Each configured virtual machine should be able to PXE boot into the operating system installer.

  6. From within the vSphere environment, power on each nova compute proxy virtual machine and watch for it to PXE boot into the OS installer via its console.

    1. If successful, the virtual machine will have the operating system automatically installed and will then automatically power off.

    2. When the virtual machine has powered off, power it on and let it boot into the operating system.

  7. Verify network settings after deploying the operating system to each node.

    • Verify that the NIC bus mapping specified in the cloud model input file (~/openstack/my_cloud/definition/data/nic_mappings.yml) matches the NIC bus mapping on each OpenStack node.

      Check the NIC bus mapping with this command:

      tux > sudo cobbler system list
    • After the playbook has completed, each controller node should have an operating system installed with an IP address configured on eth0.

  8. When the ESX compute proxy nodes are VMs, install the operating system if you have not already done so.

28.1.2.5.3.4 Deploying the Cloud

When the configuration processor has completed successfully, the cloud can be deployed. Set the ARDANA_USER_PASSWORD_ENCRYPT_KEY environment variable before running site.yml.

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost ready-deployment.yml
ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > export ARDANA_USER_PASSWORD_ENCRYPT_KEY=PASSWORD_KEY
ardana > ansible-playbook -i hosts/verb_hosts site.yml
ardana > ansible-playbook -i hosts/verb_hosts ardana-cloud-configure.yml

PASSWORD_KEY in the export command is the key used to encrypt the passwords for vCenter and NSX Manager.

28.2 Integrating with NSX for vSphere on Baremetal

This section describes the installation steps and requirements for integrating with NSX for vSphere on baremetal physical hardware.

28.2.1 Pre-Integration Checklist

The following installation and integration instructions assume an understanding of VMware's ESXi and vSphere products for setting up virtual environments.

Please review the following requirements for the VMware vSphere environment.

Software Requirements

Before you install or upgrade NSX, verify your software versions. The following are the required versions.

  • SUSE OpenStack Cloud: 8

  • VMware NSX-v Manager: 6.3.4 or higher

  • VMware NSX-v neutron plugin: Pike Release (TAG=11.0.0)

  • VMware ESXi and vSphere Appliance (vSphere Web Client): 6.0 or higher

A vCenter server (appliance) is required to manage the vSphere environment. It is recommended that you install a vCenter appliance as an ESX virtual machine.

Important
Important

Each ESXi compute cluster is required to have shared storage between the hosts in the cluster, otherwise attempts to create instances through nova-compute will fail.

28.2.2 Installing on Baremetal

OpenStack can be deployed in two ways: on baremetal (physical hardware) or in an ESXi virtual environment on virtual machines. The following instructions describe how to install OpenStack on baremetal nodes with vCenter and NSX Manager running as virtual machines. For instructions on virtual machine installation, see Section 28.1, “Integrating with NSX for vSphere”.

This deployment example will consist of two ESXi clusters at minimum: a control-plane cluster and a compute cluster. The control-plane cluster must have 3 ESXi hosts minimum (due to VMware's recommendation that each NSX controller virtual machine is on a separate host). The compute cluster must have 2 ESXi hosts minimum. There can be multiple compute clusters. The following table outlines the virtual machine specifications to be built in the control-plane cluster:

Table 28.2: NSX Hardware Requirements for Baremetal Integration

  • Compute virtual machines: 1 per compute cluster, 80GB disk, 4GB memory, 3 VMXNET Virtual Network Adapters, 2 vCPU

  • NSX Edge Gateway/DLR/Metadata-proxy appliances: required number, disk, memory, network, and CPU are autogenerated by NSXv

In addition to the ESXi hosts, it is recommended that there is one physical host for the Cloud Lifecycle Manager node and three physical hosts for the controller nodes.

28.2.2.1 Network Requirements

NSX-v requires the following for networking:

  • The ESXi hosts, vCenter, and the NSX Manager appliance must be able to resolve DNS lookups.

  • The ESXi hosts must have the NTP service configured and enabled.

  • Jumbo frames must be enabled on the switch ports that the ESXi hosts are connected to.

  • The ESXi hosts must have at least 2 physical network cards each.

28.2.2.2 Network Model

The model in these instructions requires the following networks:

ESXi Hosts and vCenter

The network that the ESXi hosts and vCenter use to route traffic.

NSX Management

The network which the NSX controllers and NSX Manager will use.

NSX VTEP Pool

The network that NSX uses to create endpoints for VxLAN tunnels.

Management

The network that OpenStack uses for deployment and maintenance of the cloud.

Internal API (optional)

The network group that will be used for management (private API) traffic within the cloud.

External API

This is the network that users will use to make requests to the cloud.

External VM

VLAN-backed provider network for external access to guest VMs (floating IPs).

28.2.2.3 vSphere port security settings

Even though the OpenStack deployment is on baremetal, it is still necessary to define each VLAN within a vSphere Distributed Switch for the nova compute proxy virtual machine. Therefore, the vSphere port security settings are shown in the table below.

IPMI

VLAN Type: Untagged. Interface: N/A. vSphere Port Group Security Settings: N/A.

ESXi Hosts and vCenter

VLAN Type: Tagged. Interface: N/A. vSphere Port Group Security Settings: Defaults.

NSX Manager (must be able to reach the ESXi Hosts and vCenter network)

VLAN Type: Tagged. Interface: N/A. vSphere Port Group Security Settings: Defaults.

NSX VTEP Pool

VLAN Type: Tagged. Interface: N/A. vSphere Port Group Security Settings: Defaults.

Management

VLAN Type: Tagged or Untagged. Interface: bond0. vSphere Port Group Security Settings: Promiscuous Mode: Accept; MAC Address Changes: Reject; Forged Transmits: Reject.

Internal API (optional; may be combined with the Management network. If network segregation is required for security reasons, keep this as a separate network.)

VLAN Type: Tagged. Interface: bond0. vSphere Port Group Security Settings: Promiscuous Mode: Accept; MAC Address Changes: Reject; Forged Transmits: Accept.

External API (Public)

VLAN Type: Tagged. Interface: N/A. vSphere Port Group Security Settings: Promiscuous Mode: Accept; MAC Address Changes: Reject; Forged Transmits: Accept.

External VM

VLAN Type: Tagged. Interface: N/A. vSphere Port Group Security Settings: Promiscuous Mode: Accept; MAC Address Changes: Reject; Forged Transmits: Accept.

28.2.2.4 Configuring the vSphere Environment

Before deploying OpenStack with NSX-v, the VMware vSphere environment must be properly configured, including setting up vSphere distributed switches and port groups. For detailed instructions, see Chapter 27, Installing ESX Computes and OVSvAPP.

Installing and configuring the VMware NSX Manager and creating the NSX network within the vSphere environment is covered below.

Before proceeding with the installation, ensure that the following are configured in the vSphere environment.

  • The vSphere datacenter is configured with at least two clusters, one control-plane cluster and one compute cluster.

  • Verify that all software, hardware, and networking requirements have been met.

  • Ensure the vSphere distributed virtual switches (DVS) are configured for each cluster.

Note
Note

The MTU setting for each DVS should be set to 1600. NSX should automatically apply this setting to each DVS during the setup process. Alternatively, the setting can be manually applied to each DVS before setup if desired.

Make sure there is a copy of the SUSE Linux Enterprise Server 12 SP4 .iso in the ardana home directory, /var/lib/ardana, and that it is called sles12sp4.iso.

Install the open-vm-tools package.

tux > sudo zypper install open-vm-tools
28.2.2.4.1 Install NSX Manager

The NSX Manager is the centralized network management component of NSX. It provides a single point of configuration and REST API entry-points.

The NSX Manager is installed as a virtual appliance on one of the ESXi hosts within the vSphere environment. This guide will cover installing the appliance on one of the ESXi hosts within the control-plane cluster. For more detailed information, refer to VMware's NSX Installation Guide.

To install the NSX Manager, download the virtual appliance from VMware and deploy the appliance within vCenter onto one of the ESXi hosts. For information on deploying appliances within vCenter, refer to VMware's documentation for ESXi 5.5 or 6.0.

During the deployment of the NSX Manager appliance, be aware of the following:

  • When prompted, select Accept extra configuration options. This will present options for configuring IPv4 and IPv6 addresses, the default gateway, DNS, NTP, and SSH properties during the installation, rather than configuring these settings manually after the installation.

  • Choose an ESXi host that resides within the control-plane cluster.

  • Ensure that the network mapped port group is the DVS port group that represents the VLAN the NSX Manager will use for its networking (in this example it is labeled as the NSX Management network).

Note
Note

The IP address assigned to the NSX Manager must be able to resolve reverse DNS.

Power on the NSX Manager virtual machine after it finishes deploying and wait for the operating system to fully load. When ready, carry out the following steps to have the NSX Manager use single sign-on (SSO) and to register the NSX Manager with vCenter:

  1. Open a web browser and enter the hostname or IP address that was assigned to the NSX Manager during setup.

  2. Log in with the username admin and the password set during the deployment.

  3. After logging in, click on Manage vCenter Registration.

  4. Configure the NSX Manager to connect to the vCenter server.

  5. Configure the NSX Manager for single sign-on (SSO) under the Lookup Service URL section.

Note
Note

When configuring SSO, use Lookup Service Port 443 for vCenter version 6.0. Use Lookup Service Port 7444 for vCenter version 5.5.

SSO makes vSphere and NSX more secure by allowing the various components to communicate with each other through a secure token exchange mechanism, instead of requiring each component to authenticate a user separately. For more details, refer to VMware's documentation on Configure Single Sign-On.

Both the Lookup Service URL and the vCenter Server sections should have a status of connected when configured properly.

Log into the vSphere Web Client (log out and back in if already logged in). The NSX Manager will appear under the Networking & Security section of the client.

Note
Note

The Networking & Security section will not appear under the vSphere desktop client. Use of the web client is required for the rest of this process.

28.2.2.4.2 Add NSX Controllers

The NSX controllers serve as the central control point for all logical switches within the vSphere environment's network, and they maintain information about all hosts, logical switches (VXLANs), and distributed logical routers.

NSX controllers will each be deployed as a virtual appliance on the ESXi hosts within the control-plane cluster to form the NSX Controller cluster. For details about NSX controllers and the NSX control plane in general, refer to VMware's NSX documentation.

Important
Important

Whatever the size of the NSX deployment, the following conditions must be met:

  • Each NSX Controller cluster must contain three controller nodes. Having a different number of controller nodes is not supported.

  • Before deploying NSX Controllers, you must deploy an NSX Manager appliance and register vCenter with NSX Manager.

  • Determine the IP pool settings for your controller cluster, including the gateway and IP address range. DNS settings are optional.

  • The NSX Controller IP network must have connectivity to the NSX Manager and to the management interfaces on the ESXi hosts.

Log in to the vSphere web client and do the following steps to add the NSX controllers:

  1. In vCenter, navigate to Home, select Networking & Security › Installation, and then select the Management tab.

  2. In the NSX Controller nodes section, click the Add Node icon represented by a green plus sign.

  3. Enter the NSX Controller settings appropriate to your environment. If you are following this example, use the control-plane clustered ESXi hosts and control-plane DVS port group for the controller settings.

  4. If it has not already been done, create an IP pool for the NSX Controller cluster with at least three IP addresses by clicking New IP Pool. Individual controllers can be in separate IP subnets, if necessary.

  5. Click OK to deploy the controller. After the first controller is completely deployed, deploy two additional controllers.

Important
Important

Three NSX controllers are mandatory. VMware recommends configuring a DRS anti-affinity rule to prevent the controllers from residing on the same ESXi host. See more information about DRS Affinity Rules.

28.2.2.4.3 Prepare Clusters for NSX Management

During Host Preparation, the NSX Manager:

  • Installs the NSX kernel modules on ESXi hosts that are members of vSphere clusters

  • Builds the NSX control-plane and management-plane infrastructure

The NSX kernel modules are packaged in VIB (vSphere Installation Bundle) files. They run within the hypervisor kernel and provide services such as distributed routing, distributed firewall, and VXLAN bridging capabilities. These files are installed on a per-cluster level, and the setup process deploys the required software on all ESXi hosts in the target cluster. When a new ESXi host is added to the cluster, the required software is automatically installed on the newly added host.

Before beginning the NSX host preparation process, make sure of the following in your environment:

  • Register vCenter with NSX Manager and deploy the NSX controllers.

  • Verify that DNS reverse lookup returns a fully qualified domain name when queried with the IP address of NSX Manager.

  • Verify that the ESXi hosts can resolve the DNS name of vCenter server.

  • Verify that the ESXi hosts can connect to vCenter Server on port 80.

  • Verify that the network time on vCenter Server and the ESXi hosts is synchronized.

  • For each vSphere cluster that will participate in NSX, verify that the ESXi hosts within each respective cluster are attached to a common VDS.

    For example, consider a cluster with two hosts, Host1 and Host2. Host1 is attached to VDS1 and VDS2, while Host2 is attached to VDS1 and VDS3. When you prepare the cluster for NSX, you can only associate NSX with VDS1. If you add another host (Host3) to the cluster and Host3 is not attached to VDS1, it is an invalid configuration, and Host3 will not be ready for NSX functionality.

  • If you have vSphere Update Manager (VUM) in your environment, you must disable it before preparing clusters for network virtualization. For information on how to check if VUM is enabled and how to disable it if necessary, see the VMware knowledge base.

  • In the vSphere web client, ensure that the cluster is in the resolved state (listed under the Host Preparation tab). If the Resolve option does not appear in the cluster's Actions list, then it is in a resolved state.

To prepare the vSphere clusters for NSX:

  1. In vCenter, select Home › Networking & Security › Installation, and then select the Host Preparation tab.

  2. Continuing with the example in these instructions, click on the Actions button (gear icon) and select Install for both the control-plane cluster and compute cluster (if you are using something other than this example, then only install on the clusters that require NSX logical switching, routing, and firewalls).

  3. Monitor the installation until the Installation Status column displays a green check mark.

    Important
    Important

    While installation is in progress, do not deploy, upgrade, or uninstall any service or component.

    Important
    Important

    If the Installation Status column displays a red warning icon and says Not Ready, click Resolve. Clicking Resolve might result in a reboot of the host. If the installation is still not successful, click the warning icon. All errors will be displayed. Take the required action and click Resolve again.

  4. To verify the VIBs (esx-vsip and esx-vxlan) are installed and registered, SSH into an ESXi host within the prepared cluster. List the names and versions of the VIBs installed by running the following command:

    tux > esxcli software vib list | grep esx
    ...
    esx-vsip      6.0.0-0.0.2732470    VMware  VMwareCertified   2015-05-29
    esx-vxlan     6.0.0-0.0.2732470    VMware  VMwareCertified   2015-05-29
    ...
Important
Important

After host preparation:

  • A host reboot is not required

  • If you add a host to a prepared cluster, the NSX VIBs are automatically installed on the host.

  • If you move a host to an unprepared cluster, the NSX VIBs are automatically uninstalled from the host. In this case, a host reboot is required to complete the uninstall process.

28.2.2.4.4 Configure VXLAN Transport Parameters

VXLAN is configured on a per-cluster basis, where each vSphere cluster that is to participate in NSX is mapped to a vSphere Distributed Virtual Switch (DVS). When mapping a vSphere cluster to a DVS, each ESXi host in that cluster is enabled for logical switches. The settings chosen in this section will be used in creating the VMkernel interface.

Configuring transport parameters involves selecting a DVS, a VLAN ID, an MTU size, an IP addressing mechanism, and a NIC teaming policy. The MTU for each switch must be set to 1550 or higher. By default, NSX sets it to 1600, which is also the recommended setting for integration with OpenStack. (A command for verifying the MTU end to end is sketched after the considerations below.)

To configure the VXLAN transport parameters:

  1. In the vSphere web client, navigate to Home › Networking & Security › Installation.

  2. Select the Host Preparation tab.

  3. Click the Configure link in the VXLAN column.

  4. Enter the required information.

  5. If you have not already done so, create an IP pool for the VXLAN tunnel end points (VTEP) by clicking New IP Pool.

  6. Click OK to create the VXLAN network.

When configuring the VXLAN transport network, consider the following:

  • Use a NIC teaming policy that best suits the environment being built. Load Balance - SRCID is usually the most flexible of the available VMKNic teaming policies. It allows each host to have a VTEP vmkernel interface for each dvuplink on the selected distributed switch (two dvuplinks give two VTEP interfaces per ESXi host).

  • Do not mix different teaming policies for different portgroups on a VDS where some use Etherchannel or Link Aggregation Control Protocol (LACPv1 or LACPv2) and others use a different teaming policy. If uplinks are shared in these different teaming policies, traffic will be interrupted. If logical routers are present, there will be routing problems. Such a configuration is not supported and should be avoided.

  • For larger environments it may be better to use DHCP for the VMKNic IP Addressing.

  • For more information and further guidance, see the VMware NSX for vSphere Network Virtualization Design Guide.
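
Once the VTEP vmkernel interfaces exist, the MTU requirement above can be verified end to end from an ESXi host. A minimal sketch, assuming the vxlan TCP/IP stack name used by NSX and a placeholder remote VTEP address; a 1572-byte payload plus ICMP and IP headers corresponds to a 1600-byte IP packet, and -d sets the don't-fragment flag:

tux > vmkping ++netstack=vxlan -d -s 1572 REMOTE_VTEP_IP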

28.2.2.4.5 Assign Segment ID Pool

Each VXLAN tunnel will need a segment ID to isolate its network traffic. Therefore, it is necessary to configure a segment ID pool for the NSX VXLAN network to use. If an NSX controller is not deployed within the vSphere environment, a multicast address range must be added to spread traffic across the network and avoid overloading a single multicast address.

For the purposes of the example in these instructions, follow the steps below to assign a segment ID pool; otherwise, follow best practices as outlined in VMware's documentation. (An API-based check of the configured pool is sketched after these steps.)

  1. In the vSphere web client, navigate to Home › Networking & Security › Installation.

  2. Select the Logical Network Preparation tab.

  3. Click Segment ID, and then Edit.

  4. Click OK to save your changes.
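
The configured pool can also be confirmed outside the web client by querying the NSX Manager REST API. This is a sketch only, assuming the NSX-v segment ID endpoint and placeholder credentials and address:

tux > curl -k -u admin:NSX_MGR_PASSWORD https://NSX_MGR_IP/api/2.0/vdn/config/segments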

28.2.2.4.7 Create a Transport Zone

A transport zone controls which hosts a logical switch can reach and has the following characteristics.

  • It can span one or more vSphere clusters.

  • Transport zones dictate which clusters can participate in the use of a particular network. Therefore they dictate which VMs can participate in the use of a particular network.

  • A vSphere NSX environment can contain one or more transport zones based on the environment's requirements.

  • A host cluster can belong to multiple transport zones.

  • A logical switch can belong to only one transport zone.

Note

OpenStack has only been verified to work with a single transport zone within a vSphere NSX-v environment. Other configurations are currently not supported.

For more information on transport zones, refer to VMware's Add A Transport Zone.

To create a transport zone (an API query that lists the resulting transport zone is sketched after these steps):

  1. In the vSphere web client, navigate to Home › Networking & Security › Installation.

  2. Select the Logical Network Preparation tab.

  3. Click Transport Zones, and then click the New Transport Zone (New Logical Switch) icon.

  4. In the New Transport Zone dialog box, type a name and an optional description for the transport zone.

  5. For these example instructions, select the control plane mode as Unicast.

    Note

    The control plane mode to select depends on whether an NSX controller is deployed in the environment and whether the environment will use multicast addresses:

    • Unicast (what this set of instructions uses): The control plane is handled by an NSX controller. All unicast traffic leverages optimized headend replication. No multicast IP addresses or special network configuration is required.

    • Multicast: Multicast IP addresses in the physical network are used for the control plane. This mode is recommended only when upgrading from older VXLAN deployments. Requires PIM/IGMP in the physical network.

    • Hybrid: Offloads local traffic replication to the physical network (L2 multicast). This requires IGMP snooping on the first-hop switch and access to an IGMP querier in each VTEP subnet, but does not require PIM. The first-hop switch handles traffic replication for the subnet.

  6. Select the clusters to be added to the transport zone.

  7. Click OK to save your changes.
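
The transport zone just created can also be listed through the NSX Manager REST API; the identifier returned (for example, vdnscope-1) is the value used later as vdn_scope_id in nsx/nsx_config.yml. A sketch, assuming the NSX-v scopes endpoint and placeholder credentials and address:

tux > curl -k -u admin:NSX_MGR_PASSWORD https://NSX_MGR_IP/api/2.0/vdn/scopes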

28.2.2.4.8 Deploying SUSE OpenStack Cloud

With the vSphere environment setup complete, SUSE OpenStack Cloud can be deployed. The following sections cover creating virtual machines within the vSphere environment, configuring the cloud model, and integrating the NSX-v neutron core plugin into OpenStack:

  1. Create the virtual machines

  2. Deploy the Cloud Lifecycle Manager

  3. Configure the neutron environment with NSX-v

  4. Modify the cloud input model

  5. Set up the parameters

  6. Deploy the Operating System with Cobbler

  7. Deploy the cloud

28.2.2.4.9 Deploying SUSE OpenStack Cloud on Baremetal

Within the vSphere environment, create the OpenStack compute proxy virtual machines. There needs to be one nova compute proxy virtual machine per ESXi compute cluster.

For the minimum NSX hardware requirements, refer to Table 28.2, “NSX Hardware Requirements for Baremetal Integration”. Also be aware of the networking model to use for the VM network interfaces; see Table 28.3, “NSX Interface Requirements”:

If ESX VMs are to be used as nova compute proxy nodes, set up three LAN interfaces in each virtual machine as shown in the table below. There is at least one nova compute proxy node per cluster.

Table 28.3: NSX Interface Requirements

Network Group    Interface
Management       eth0
External API     eth1
Internal API     eth2

28.2.2.4.9.1 Advanced Configuration Option
Important

Within vSphere, for each of the virtual machines:

  • In the Options section, under Advanced configuration parameters, ensure that the disk.EnableUUID option is set to true.

  • If the option does not exist, it must be added. This option is required for the OpenStack deployment. (A command-line sketch for setting it is shown after this list.)

  • If the option is not specified, then the deployment will fail when attempting to configure the disks of each virtual machine.
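
If you manage many proxy VMs, the parameter can also be set from the command line instead of the web client. The following is a sketch using the open source govc CLI, which is not part of this product; it assumes govc is installed and configured with your vCenter credentials, and VM_NAME is a placeholder:

tux > govc vm.change -vm VM_NAME -e disk.EnableUUID=TRUE   # set the advanced parameter
tux > govc vm.info -e VM_NAME | grep -i enableuuid         # confirm the value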

28.2.2.4.9.2 Setting Up the Cloud Lifecycle Manager
28.2.2.4.9.2.1 Installing the Cloud Lifecycle Manager

Setting ARDANA_INIT_AUTO=1 is optional; it avoids stopping for authentication at any step. You can also run ardana-init to launch the Cloud Lifecycle Manager. You will be prompted to enter an optional SSH passphrase, which is used to protect the key used by Ansible when connecting to its client nodes. If you do not want to use a passphrase, press Enter at the prompt.

If you have protected the SSH key with a passphrase, you can avoid having to enter the passphrase on every attempt by Ansible to connect to its client nodes with the following commands:

ardana > eval $(ssh-agent)
ardana > ssh-add ~/.ssh/id_rsa

The Cloud Lifecycle Manager will contain the installation scripts and configuration files to deploy your cloud. You can set up the Cloud Lifecycle Manager on a dedicated node or on your first controller node. The default choice is to use the first controller node as the Cloud Lifecycle Manager.

  1. Download the product from:

    1. SUSE Customer Center

  2. Boot your Cloud Lifecycle Manager from the SLES ISO contained in the download.

  3. Enter install (all lower-case, exactly as spelled out here) to start installation.

  4. Select the language. Note that only the English language selection is currently supported.

  5. Select the location.

  6. Select the keyboard layout.

  7. Select the primary network interface, if prompted:

    1. Assign IP address, subnet mask, and default gateway

  8. Create new account:

    1. Enter a username.

    2. Enter a password.

    3. Enter time zone.

Once the initial installation is finished, complete the Cloud Lifecycle Manager setup with these steps:

  1. Ensure your Cloud Lifecycle Manager has a valid DNS nameserver specified in /etc/resolv.conf.

  2. Set the environment variable LC_ALL:

    export LC_ALL=C
    Note

    This can be added to ~/.bashrc or /etc/bash.bashrc.

The node should now have a working SLES setup.

28.2.2.4.9.3 Configure the Neutron Environment with NSX-v

In summary, integrating NSX-v has four major steps:

  1. Modify the input model to define the server roles, servers, network roles and networks. Section 28.1.2.5.3.2, “Modify the Input Model”

  2. Set up the parameters needed for neutron and nova to communicate with ESX and NSX Manager.

  3. Deploy the operating system with Cobbler. Section 28.1.2.5.3.3, “Deploying the Operating System with Cobbler”

  4. Deploy the cloud. Section 28.1.2.5.3.4, “Deploying the Cloud”

28.2.2.4.9.3.1 Third-Party Import of VMware NSX-v Into neutron and python-neutronclient

To import the NSX-v neutron core-plugin into Cloud Lifecycle Manager, run the third-party import playbook.

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost third-party-import.yml
28.2.2.4.9.3.2 Modify the Input Model

After the third-party import has completed successfully, modify the input model:

  1. Prepare for input model changes

  2. Define the servers and server roles needed for an NSX-v cloud.

  3. Define the necessary networks and network groups

  4. Specify the services needed to be deployed on the Cloud Lifecycle Manager controllers and the nova ESX compute proxy nodes.

  5. Commit the changes and run the configuration processor.

28.2.2.4.9.3.2.1 Prepare for Input Model Changes

The previous steps created a modified SUSE OpenStack Cloud tarball with the NSX-v core plugin in the neutron and neutronclient venvs. The tar file can now be extracted and the ardana-init.bash script can be run to set up the deployment files and directories. If a modified tar file was not created, then extract the tar from the /media/cdrom/ardana location.

To run the ardana-init.bash script, which is included in the build, use this command:

ardana > ~/ardana/ardana-init.bash
28.2.2.4.9.3.2.2 Create the Input Model

Copy the example input model to the ~/openstack/my_cloud/definition/ directory:

ardana > cd ~/ardana-extensions/ardana-extensions-nsx/vmware/examples/models
ardana > cp -R entry-scale-nsx ~/openstack/my_cloud/definition

Refer to the reference input model in ardana-extensions/ardana-extensions-nsx/vmware/examples/models/entry-scale-nsx/ for details about how these definitions should be made. The main differences between this model and the standard Cloud Lifecycle Manager input models are:

  • Only the neutron-server is deployed. No other neutron agents are deployed.

  • Additional parameters need to be set in pass_through.yml and nsx/nsx_config.yml.

  • nova ESX compute proxy nodes may be ESX virtual machines.

28.2.2.4.9.3.2.2.1 Set up the Parameters

The special parameters needed for the NSX-v integration are set in the files pass_through.yml and nsx/nsx_config.yml. Both files are in the ~/openstack/my_cloud/definition/data directory.

The parameters in pass_through.yml are shown in the sample input model in the ardana-extensions/ardana-extensions-nsx/vmware/examples/models/entry-scale-nsx/ directory. The comments in the sample input model file describe how to locate the values of the required parameters. (An example of generating the encrypted password is shown after the listing.)

#
# (c) Copyright 2017 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
product:
  version: 2
pass-through:
  global:
    vmware:
      - username: VCENTER_ADMIN_USERNAME
        ip: VCENTER_IP
        port: 443
        cert_check: false
        # The password needs to be encrypted using the script
        # openstack/ardana/ansible/ardanaencrypt.py on the deployer:
        #
        # $ cd ~/openstack/ardana/ansible
        # $ export ARDANA_USER_PASSWORD_ENCRYPT_KEY=ENCRYPTION_KEY
        # $ ./ardanaencrypt.py
        #
        # The script will prompt for the vCenter password. The string
        # generated is the encrypted password. Enter the string
        # enclosed by double-quotes below.
        password: "ENCRYPTED_PASSWD_FROM_ARDANAENCRYPT"

        # The id is obtained from the URL
        # https://VCENTER_IP/mob/?moid=ServiceInstance&doPath=content%2eabout,
        # field instanceUUID.
        id: VCENTER_UUID
  servers:
    -
      # Here the 'id' refers to the name of the node running the
      # esx-compute-proxy. This is identical to the 'servers.id' in
      # servers.yml. There should be one esx-compute-proxy node per ESX
      # resource pool.
      id: esx-compute1
      data:
        vmware:
          vcenter_cluster: VMWARE_CLUSTER1_NAME
          vcenter_id: VCENTER_UUID
    -
      id: esx-compute2
      data:
        vmware:
          vcenter_cluster: VMWARE_CLUSTER2_NAME
          vcenter_id: VCENTER_UUID
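
The encrypted password strings referenced in the comments above are generated on the deployer with the ardanaencrypt.py script. For example (ENCRYPTION_KEY is a value you choose and must reuse when encrypting both the vCenter and the NSX Manager passwords):

ardana > cd ~/openstack/ardana/ansible
ardana > export ARDANA_USER_PASSWORD_ENCRYPT_KEY=ENCRYPTION_KEY
ardana > ./ardanaencrypt.py

The script prompts for the password and prints the encrypted string, which you then paste, enclosed in double quotes, into pass_through.yml and nsx/nsx_config.yml.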

The parameters in nsx/nsx_config.yml are shown below. The comments describe how to retrieve the values. (A quick connectivity check against NSX Manager is sketched after the listing.)

# (c) Copyright 2017 SUSE LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
---
  product:
    version: 2
  configuration-data:
    - name: NSX-CONFIG-CP1
      services:
        - nsx
      data:
        # (Required) URL for the NSXv manager (for example, https://management_ip).
        manager_uri: 'https://NSX_MGR_IP'

        # (Required) NSXv username.
        user: 'admin'

        # (Required) Encrypted NSX Manager password.
        # Password encryption is done by the script
        # ~/openstack/ardana/ansible/ardanaencrypt.py on the deployer:
        #
        # $ cd ~/openstack/ardana/ansible
        # $ export ARDANA_USER_PASSWORD_ENCRYPT_KEY=ENCRYPTION_KEY
        # $ ./ardanaencrypt.py
        #
        # NOTE: Make sure that the NSX Manager password is encrypted with the same key
        # used to encrypt the VCenter password.
        #
        # The script will prompt for the NSX Manager password. The string
        # generated is the encrypted password. Enter the string enclosed
        # by double-quotes below.
        password: "ENCRYPTED_NSX_MGR_PASSWD_FROM_ARDANAENCRYPT"
        # (Required) datacenter id for edge deployment.
        # Retrieved using
        #    http://VCENTER_IP_ADDR/mob/?moid=ServiceInstance&doPath=content
        # click on the value from the rootFolder property. The datacenter_moid is
        # the value of the childEntity property.
        # The vCenter-ip-address comes from the file pass_through.yml in the
        # input model under "pass-through.global.vmware.ip".
        datacenter_moid: 'datacenter-21'
        # (Required) id of logic switch for physical network connectivity.
        # How to retrieve
        # 1. Get to the same page where the datacenter_moid is found.
        # 2. Click on the value of the rootFolder property.
        # 3. Click on the value of the childEntity property
        # 4. Look at the network property. The external network is
        #    network associated with EXTERNAL VM in VCenter.
        external_network: 'dvportgroup-74'
        # (Required) clusters ids containing OpenStack hosts.
        # Retrieved using http://VCENTER_IP_ADDR/mob, click on the value
        # from the rootFolder property. Then click on the value of the
        # hostFolder property. Cluster_moids are the values under childEntity
        # property of the compute clusters.
        cluster_moid: 'domain-c33,domain-c35'
        # (Required) resource-pool id for edge deployment.
        resource_pool_id: 'resgroup-67'
        # (Optional) datastore id for edge deployment. If not needed,
        # do not declare it.
        # datastore_id: 'datastore-117'

        # (Required) network scope id of the transport zone.
        # To get the vdn_scope_id, in the vSphere web client from the Home
        # menu:
        #   1. Click on Networking & Security.
        #   2. Click on Installation.
        #   3. Click on the Logical Network Preparation tab.
        #   4. Click on the Transport Zones button.
        #   5. Double-click on the transport zone being configured.
        #   6. Select the Manage tab.
        #   7. The vdn_scope_id will appear at the end of the URL.
        vdn_scope_id: 'vdnscope-1'

        # (Optional) Dvs id for VLAN based networks. If not needed,
        # do not declare it.
        # dvs_id: 'dvs-68'

        # (Required) backup_edge_pool: backup edge pools management range,
        # - <edge_type>:[edge_size]:MINIMUM_POOLED_EDGES:MAXIMUM_POOLED_EDGES
        # - edge_type: service (service edge) or vdr (distributed edge)
        # - edge_size: compact, large (the default), xlarge, or quadlarge
        backup_edge_pool: 'service:compact:4:10,vdr:compact:4:10'

        # (Optional) mgt_net_proxy_ips: management network IP address for
        # metadata proxy. If not needed, do not declare it.
        # mgt_net_proxy_ips: '10.142.14.251,10.142.14.252'

        # (Optional) mgt_net_proxy_netmask: management network netmask for
        # metadata proxy. If not needed, do not declare it.
        # mgt_net_proxy_netmask: '255.255.255.0'

        # (Optional) mgt_net_moid: Network ID for management network connectivity
        # Do not declare if not used.
        # mgt_net_moid: 'dvportgroup-73'

        # ca_file: Name of the certificate file. If insecure is set to True,
        # then this parameter is ignored. If insecure is set to False and this
        # parameter is not defined, then the system root CAs will be used
        # to verify the server certificate.
        ca_file: a/nsx/certificate/file

        # insecure:
        # If true (default), the NSXv server certificate is not verified.
        # If false, then the default CA truststore is used for verification.
        # This option is ignored if "ca_file" is set
        insecure: True
        # (Optional) edge_ha: if true, will duplicate any edge pool resources
        # Default to False if undeclared.
        # edge_ha: False
        # (Optional) spoofguard_enabled:
        # If True (default), indicates NSXV spoofguard component is used to
        # implement port-security feature.
        # spoofguard_enabled: True
        # (Optional) exclusive_router_appliance_size:
        # Edge appliance size to be used for creating an exclusive router.
        # Valid values: 'compact', 'large', 'xlarge', 'quadlarge'.
        # Defaults to 'compact' if not declared.
        # exclusive_router_appliance_size: 'compact'
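
Before running the configuration processor, it can be useful to confirm that the manager_uri and credentials above actually reach NSX Manager. A minimal sketch, assuming the NSX-v appliance-management API endpoint and using the plain-text (not the encrypted) admin password:

ardana > curl -k -u admin:NSX_MGR_PASSWORD https://NSX_MGR_IP/api/1.0/appliance-management/global/info
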
28.2.2.4.9.3.2.3 Commit Changes and Run the Configuration Processor

Commit your changes with the input model and the required configuration values added to the pass_through.yml and nsx/nsx_config.yml files.

ardana > cd ~/openstack/my_cloud/definition
ardana > git add -A
ardana > git commit -m "Configuration changes for NSX deployment"
ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml \
 -e encrypt="" -e rekey=""

If the playbook config-processor-run.yml fails, there is an error in the input model. Fix the error and repeat the above steps.

28.2.2.4.9.3.3 Deploying the Operating System with Cobbler
  1. From the Cloud Lifecycle Manager, run the Cobbler deployment playbook so that Cobbler can install the operating system on the nodes that need to be deployed:

    ardana > cd ~/openstack/ardana/ansible
    ardana > ansible-playbook -i hosts/localhost cobbler-deploy.yml
  2. Verify the nodes that will have an operating system installed by Cobbler by running this command:

    tux > sudo cobbler system find --netboot-enabled=1
  3. Reimage the nodes using Cobbler. Do not use Cobbler to reimage the nodes running as ESX virtual machines. The command below is run on a setup where the nova ESX compute proxies are VMs. Controllers 1, 2, and 3 are running on physical servers.

    ardana > cd ~/openstack/ardana/ansible
    ardana > ansible-playbook -i hosts/localhost bm-reimage.yml -e \
       nodelist=controller1,controller2,controller3
  4. When the playbook has completed, each controller node should have an operating system installed with an IP address configured on eth0.

  5. After your controller nodes have been completed, you should install the operating system on your nova compute proxy virtual machines. Each configured virtual machine should be able to PXE boot into the operating system installer.

  6. From within the vSphere environment, power on each nova compute proxy virtual machine and watch for it to PXE boot into the OS installer via its console.

    1. If successful, the virtual machine will have the operating system automatically installed and will then automatically power off.

    2. When the virtual machine has powered off, power it on and let it boot into the operating system.

  7. Verify network settings after deploying the operating system to each node (a quick command-line check is sketched after this list).

    • Verify that the NIC bus mapping specified in the cloud model input file (~/ardana/my_cloud/definition/data/nic_mappings.yml) matches the NIC bus mapping on each OpenStack node.

      Check the NIC bus mapping with this command:

      tux > sudo cobbler system list

  8. When the ESX compute proxy nodes are VMs, install the operating system if you have not already done so.
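
One simple way to perform the verification in step 7 is to log in to each node and inspect eth0 directly. A sketch with a placeholder node name; adjust the host name and SSH user to your environment:

ardana > ssh CONTROLLER_NODE ip addr show eth0
ardana > ssh CONTROLLER_NODE ip route show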

28.2.2.4.9.3.4 Deploying the Cloud

When the configuration processor has completed successfully, the cloud can be deployed. Set the ARDANA_USER_PASSWORD_ENCRYPT_KEY environment variable before running site.yml.

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost ready-deployment.yml
ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > export ARDANA_USER_PASSWORD_ENCRYPT_KEY=PASSWORD_KEY
ardana > ansible-playbook -i hosts/verb_hosts site.yml
ardana > ansible-playbook -i hosts/verb_hosts ardana-cloud-configure.yml

PASSWORD_KEY in the export command is the key used to encrypt the passwords for vCenter and NSX Manager.

28.3 Verifying the NSX-v Functionality After Integration

After you have completed your OpenStack deployment and integrated the NSX-v neutron plugin, you can use these steps to verify that NSX-v is enabled and working in the environment.

  1. Validate neutron from the Cloud Lifecycle Manager. All of these commands require that you authenticate by sourcing the service.osrc file:

    ardana > source ~/service.osrc
  2. List your neutron networks:

    ardana > openstack network list
    +--------------------------------------+----------------+-------------------------------------------------------+
    | id                                   | name           | subnets                                               |
    +--------------------------------------+----------------+-------------------------------------------------------+
    | 574d5f6c-871e-47f8-86d2-4b7c33d91002 | inter-edge-net | c5e35e22-0c1c-4886-b7f3-9ce3a6ab1512 169.254.128.0/17 |
    +--------------------------------------+----------------+-------------------------------------------------------+
  3. List your neutron subnets:

    ardana > openstack subnet list
    +--------------------------------------+-------------------+------------------+------------------------------------------------------+
    | id                                   | name              | cidr             | allocation_pools                                     |
    +--------------------------------------+-------------------+------------------+------------------------------------------------------+
    | c5e35e22-0c1c-4886-b7f3-9ce3a6ab1512 | inter-edge-subnet | 169.254.128.0/17 | {"start": "169.254.128.2", "end": "169.254.255.254"} |
    +--------------------------------------+-------------------+------------------+------------------------------------------------------+
  4. List your neutron routers:

    ardana > openstack router list
    +--------------------------------------+-----------------------+-----------------------+-------------+
    | id                                   | name                  | external_gateway_info | distributed |
    +--------------------------------------+-----------------------+-----------------------+-------------+
    | 1c5bf781-5120-4b7e-938b-856e23e9f156 | metadata_proxy_router | null                  | False       |
    | 8b5d03bf-6f77-4ea9-bb27-87dd2097eb5c | metadata_proxy_router | null                  | False       |
    +--------------------------------------+-----------------------+-----------------------+-------------+
  5. List your neutron ports:

    ardana > openstack port list
    +--------------------------------------+------+-------------------+------------------------------------------------------+
    | id                                   | name | mac_address       | fixed_ips                                            |
    +--------------------------------------+------+-------------------+------------------------------------------------------+
    | 7f5f0461-0db4-4b9a-a0c6-faa0010b9be2 |      | fa:16:3e:e5:50:d4 | {"subnet_id":                                        |
    |                                      |      |                   | "c5e35e22-0c1c-4886-b7f3-9ce3a6ab1512",              |
    |                                      |      |                   | "ip_address": "169.254.128.2"}                       |
    | 89f27dff-f38d-4084-b9b0-ded495255dcb |      | fa:16:3e:96:a0:28 | {"subnet_id":                                        |
    |                                      |      |                   | "c5e35e22-0c1c-4886-b7f3-9ce3a6ab1512",              |
    |                                      |      |                   | "ip_address": "169.254.128.3"}                       |
    +--------------------------------------+------+-------------------+------------------------------------------------------+
  6. List your neutron security group rules:

    ardana > openstack security group rule list
    +--------------------------------------+----------------+-----------+-----------+---------------+-----------------+
    | id                                   | security_group | direction | ethertype | protocol/port | remote          |
    +--------------------------------------+----------------+-----------+-----------+---------------+-----------------+
    | 0385bd3a-1050-4bc2-a212-22ddab00c488 | default        | egress    | IPv6      | any           | any             |
    | 19f6f841-1a9a-4b4b-bc45-7e8501953d8f | default        | ingress   | IPv6      | any           | default (group) |
    | 1b3b5925-7aa6-4b74-9df0-f417ee6218f1 | default        | egress    | IPv4      | any           | any             |
    | 256953cc-23d7-404d-b140-2600d55e44a2 | default        | ingress   | IPv4      | any           | default (group) |
    | 314c4e25-5822-44b4-9d82-4658ae87d93f | default        | egress    | IPv6      | any           | any             |
    | 59d4a71e-9f99-4b3b-b75b-7c9ad34081e0 | default        | ingress   | IPv6      | any           | default (group) |
    | 887e25ef-64b7-4b69-b301-e053f88efa6c | default        | ingress   | IPv4      | any           | default (group) |
    | 949e9744-75cd-4ae2-8cc6-6c0f578162d7 | default        | ingress   | IPv4      | any           | default (group) |
    | 9a83027e-d6d6-4b6b-94fa-7c0ced2eba37 | default        | egress    | IPv4      | any           | any             |
    | abf63b79-35ad-428a-8829-8e8d796a9917 | default        | egress    | IPv4      | any           | any             |
    | be34b72b-66b6-4019-b782-7d91674ca01d | default        | ingress   | IPv6      | any           | default (group) |
    | bf3d87ce-05c8-400d-88d9-a940e43760ca | default        | egress    | IPv6      | any           | any             |
    +--------------------------------------+----------------+-----------+-----------+---------------+-----------------+

Verify metadata proxy functionality

To test that the metadata proxy virtual machines are working as intended, verify from within vSphere that there are at least two metadata proxy virtual machines (there will be four if edge high availability was set to true).

Once that is verified, create a new compute instance with the API, the CLI, or the cloud console GUI, and log in to the instance. From within the instance, use curl to retrieve the instance-id from the metadata proxy address.

ardana > curl http://169.254.169.254/latest/meta-data/instance-id
i-00000004
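
The same address can be used to list all available metadata keys, since the proxy follows the EC2-style metadata layout; instance-id is just one of the keys returned. For example, from inside the instance:

ardana > curl http://169.254.169.254/latest/meta-data/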