Applies to SUSE OpenStack Cloud 8

6 Creating a Highly Available Router

6.1 CVR and DVR Highly Available Routers

CVR (Centralized Virtual Routing) and DVR (Distributed Virtual Routing) are the two routing technologies available in SUSE OpenStack Cloud 8. You can create Highly Available (HA) versions of both CVR and DVR routers by using the options in the table below when creating your router.

The neutron command for creating a router, neutron router-create router_name --distributed=True|False --ha=True|False, requires administrative permissions. See the example in the next section, Creating a High Availability Router.

--distributed | --ha  | Router Type | Description
False         | False | CVR         | Centralized Virtual Router
False         | True  | CVRHA       | Centralized Virtual Router with L3 High Availability
True          | False | DVR         | Distributed Virtual Router without SNAT High Availability
True          | True  | DVRHA       | Distributed Virtual Router with SNAT High Availability
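The four combinations in the table map directly onto the create command. A sketch of each invocation (the router names here are illustrative, and admin credentials are assumed):

```shell
# Illustrative router names; all four invocations require admin permissions.
$ neutron router-create router-cvr                                   # CVR
$ neutron router-create router-cvrha --ha=True                       # CVRHA
$ neutron router-create router-dvr --distributed=True                # DVR
$ neutron router-create router-dvrha --distributed=True --ha=True    # DVRHA
```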

6.2 Creating a High Availability Router

You can create a highly available router using the neutron command line interface.

  1. To create the HA router, add --ha=True to the neutron router-create command. If you also want to make the router distributed, add --distributed=True. In this example, a DVR SNAT HA router named routerHA is created.

    $ neutron router-create routerHA --distributed=True --ha=True
  2. Set the gateway for the external network and add an interface on the private subnet

    $ neutron router-gateway-set routerHA <ext-net-id>
    $ neutron router-interface-add routerHA <private_subnet_id>
  3. Once the router is created, the gateway set, and the interface attached, you have a highly available router.
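You can confirm the flags on the finished router with neutron router-show; a sketch of what to look for (field names as reported by the neutron client):

```shell
$ neutron router-show routerHA
# Among the reported fields, expect:
#   distributed | True
#   ha          | True
```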

6.3 Test Router for High Availability

You can demonstrate that the router is HA by running a continuous ping from a VM instance on the private network to an external server, such as a public DNS server. While the ping is running, list the L3 agents hosting the router and identify the agent hosting the active router. Induce a failover by creating a catastrophic event, such as shutting down the node hosting that L3 agent. Once the node is shut down, the ping from the VM to the external network continues to run as the backup L3 agent takes over. To verify that the agent hosting the active router has changed, list the agents hosting the router again; a different agent will now be hosting the active router.

  1. Boot an instance on the private network

    $ nova boot --image <image_id> --flavor <flavor_id> --nic net-id=<private_net_id> --key-name <key> VM1
  2. Log in to the VM using the SSH key

    $ ssh -i <key> <ipaddress of VM1>
  3. Start a ping to an external address, shown here as X.X.X.X. While pinging, verify there is no packet loss and leave the ping running.

    $ ping X.X.X.X
  4. Check which agent is hosting the active router

    $ neutron l3-agent-list-hosting-router <router_id>
  5. Shut down the node hosting the active agent.

  6. Within 10 seconds, check again to see which L3 agent is hosting the active router.

    $ neutron l3-agent-list-hosting-router <router_id>
  7. You will see that a different agent is now hosting the active router, while the ping from the VM continues without interruption.

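The ha_state column in the agent listing distinguishes the active instance from its standbys. A minimal sketch of picking the active host out of that listing; the table below is inlined sample output in the neutron client's format, with hypothetical agent IDs and host names (node1, node2):

```shell
# Sketch: find which host runs the active HA router instance by parsing
# `neutron l3-agent-list-hosting-router <router_id>` output.
# The inlined table stands in for the real command output; IDs and host
# names are hypothetical.
sample_output='+--------------------------------------+-------+----------------+-------+----------+
| id                                   | host  | admin_state_up | alive | ha_state |
+--------------------------------------+-------+----------------+-------+----------+
| 0b9d4b82-1111-2222-3333-444455556666 | node1 | True           | :-)   | active   |
| 5f1a2c3d-7777-8888-9999-000011112222 | node2 | True           | :-)   | standby  |
+--------------------------------------+-------+----------------+-------+----------+'

# Field 3 (split on "|") is the host; keep only the row whose ha_state
# column (field 6) is "active", and strip the padding spaces.
active_host=$(printf '%s\n' "$sample_output" |
  awk -F'|' '$6 ~ /active/ { gsub(/ /, "", $3); print $3 }')
echo "$active_host"   # prints node1
```

In a live test, pipe the real command output into the same awk filter instead of the inlined sample; running the filter before and after the node shutdown shows the active host changing.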