Virtual IP with OpenStack Neutron

Nuriel Shem-Tov
7 min read · Sep 6, 2017


Preamble

As part of Jexia’s infrastructure team we are often required to deploy services in a highly-available fashion. Since we use OpenStack as our cloud provider, we had to pull this off using OpenStack’s networking software — Neutron.

Neutron has the ability to assign a “shared-IP” (or Virtual-IP) to any number of virtual machines. In OpenStack, a virtual machine has a default Neutron port providing its default IP address.

We can “pair” the shared-IP with this port. This allows the virtual machine to configure both its default IP and shared IP on a single interface.

In this blog post we would like to share how we leverage OpenStack’s Neutron shared-IP to provide a highly available service. We decided a proof-of-concept tutorial is a good way to do it.

Ergo, to get the most out of this blog post, you need to have:
- basic experience working with OpenStack (Neutron)
- understanding of the requirements for deploying a highly-available service

Setup

(Diagram: the basic setup, showing the external and internal networks, the two web servers sharing a VIP, and the floating IP.)

Okay, let’s explain this diagram:

We have an “external” network (10.10.0.0/24) and an internal network (192.168.168.0/24).

Note: this is a “fake” external network for the purpose of this blog. In reality, any available external network addresses can be used.

Within the internal network we have two web-servers: vm01 and vm02.

Both are running the httpd and keepalived services (more about keepalived later). The IP address 192.168.168.8 is allocated to vm01 and 192.168.168.11 to vm02.

In addition, there is a virtual-IP (VIP, also referred to as a shared-IP) with the address 192.168.168.3. This address is associated with an IP address on the external network, 10.10.0.237, which is a “floating IP”.

The floating IP is the address a client on the internet would use to reach any web page served on the vm01 and vm02 web-servers.

To sum this up:

  • External network: 10.10.0.0/24
  • Internal network: 192.168.168.0/24
  • Virtual IP (shared IP): 192.168.168.3
  • Floating IP (external): 10.10.0.237
  • vm01: 192.168.168.8
  • vm02: 192.168.168.11

Here’s a Simplified Scenario…

Let’s imagine a client accesses the address http://10.10.0.237/index.html. The address translates to its associated internal IP, 192.168.168.3 (i.e. the VIP).

It is important to note that the VIP can only be configured on one of the web-servers at any given time. This server will be the one to respond to a client’s request.

Suppose that vm01 goes down for some reason. The VIP will then be made available on vm02, so the client can keep using the same external IP. This magic is handled by keepalived.

Steps Breakdown

Listed below are the steps we are going to take to configure our highly available setup in OpenStack:

  1. Networking Setup
    Create an external network 10.10.0.0/24, an internal network 192.168.168.0/24 and finally a virtual router with an interface to each one of these networks.
  2. Security Rule
    Add security group rule to allow protocol 112 (VRRP) for keepalived to work.
  3. Deploy VMs
    Deploy two CentOS 7 instances (virtual machines) onto the internal network 192.168.168.0/24 (each VM will automatically get a unique address from this subnet) — vm01: 192.168.168.8, vm02: 192.168.168.11
  4. Create VIP
    Create a Neutron port on the internal network. In our case it gets the IP address 192.168.168.3 — this is the Virtual IP which will be shared by both VMs.
  5. Create Floating IP
    Allocate a floating IP on the external network. In our case it is 10.10.0.237. This will be the IP to which external users/the public will connect.
  6. Associate Floating IP to VIP
    Associate the floating IP we’ve created earlier (10.10.0.237) to the VIP address we’ve created (192.168.168.3).
  7. Update VM Ports
    Update the VMs’ network ports using “allowed-address-pairs” and provide the VIP (192.168.168.3).
  8. Install Services
    Install keepalived and httpd on both VMs (for now we can access them from the controller node using ip netns exec <id> ssh centos@${vm_ipaddress} -i sshkey)
  9. Configure Services
    Configure keepalived on both VMs (vm01 as master, vm02 as backup). Edit the default index.html of httpd to output the hostname or some string to help identify which VM is replying.
  10. Test
    Shutdown the master VM (vm01) and keep running curl to the floating IP (10.10.0.237). We should see the backup VM replying after a few moments.

Commands

Note that we are using TripleO’s Undercloud/Overcloud setup. We are executing the commands from the Undercloud to control the Overcloud.

In addition, it is worth mentioning that we provide these commands as a reference. They should not be entered exactly as they appear, as your environment might be configured differently.

Step 1

Create external network and subnet:
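A sketch with the legacy neutron CLI (the names external_net/external_subnet, the gateway and the allocation pool are example values for this walkthrough):

neutron net-create external_net --router:external=True
neutron subnet-create external_net 10.10.0.0/24 --name external_subnet \
  --disable-dhcp --gateway 10.10.0.1 \
  --allocation-pool start=10.10.0.200,end=10.10.0.250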

Create internal network and subnet:
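Similarly for the internal network (again, the names and the DNS server are example values):

neutron net-create internal_net
neutron subnet-create internal_net 192.168.168.0/24 --name internal_subnet \
  --dns-nameserver 8.8.8.8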

Create a virtual router, set the external network as gateway and add an interface to the internal network:
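For example (router1 is an arbitrary name):

neutron router-create router1
neutron router-gateway-set router1 external_net
neutron router-interface-add router1 internal_subnet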

Step 2

Run the following commands to ensure protocol 112 (VRRP) is allowed in the default security group:
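For example (the VRRP rule is the essential one; the ICMP, SSH and HTTP rules are only needed for the later ping/SSH/curl tests if your default group does not already allow that traffic):

neutron security-group-rule-create --direction ingress --protocol 112 default
neutron security-group-rule-create --direction ingress --protocol icmp default
neutron security-group-rule-create --direction ingress --protocol tcp \
  --port-range-min 22 --port-range-max 22 default
neutron security-group-rule-create --direction ingress --protocol tcp \
  --port-range-min 80 --port-range-max 80 default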

(Note that you can also create a custom security group instead of using the default)

Step 3

Proceed to create the two virtual machines with default interfaces on the internal network we’ve created in step 1:
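Something along these lines (flavor, image and key names are placeholders; the network ID comes from neutron net-list):

nova boot --flavor m1.small --image centos7 --key-name sshkey \
  --nic net-id=<internal_net_id> vm01
nova boot --flavor m1.small --image centos7 --key-name sshkey \
  --nic net-id=<internal_net_id> vm02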

Step 4

In this step we create the virtual IP. We need to know both the subnet and network IDs on which we create the VIP.

In the first command we list the existing networks. We make note of the subnet ID and network ID.

In the second command we specify the subnet ID and network ID on which to create the new port.
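A sketch of both commands (vip_port is an arbitrary name; the IDs are environment-specific, and in our case the new port was assigned 192.168.168.3):

neutron net-list
neutron port-create --name vip_port \
  --fixed-ip subnet_id=<internal_subnet_id> <internal_net_id>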

Step 5

Allocate a floating IP from the external network:
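For example:

neutron floatingip-create external_net

The output includes the allocated address (10.10.0.237 in our case) and the floating IP’s ID, which we will need in the next step.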

Step 6

Associate the new floating IP’s ID with the port’s ID of the VIP we’ve created in step 4:
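For example (both IDs are taken from the output of the previous steps):

neutron floatingip-associate <floatingip_id> <vip_port_id>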

Okay, let’s have a quick summary of what we’ve done to this point:

We’ve…

  • created external and internal networks
  • deployed two virtual machines
  • created a virtual IP and a floating IP
  • associated the two IPs so that traffic to the floating IP reaches the virtual IP

We will proceed with the list of steps as we allow the virtual machines to make use of the virtual IP. Then, we will configure them to act as web servers. Finally, we will configure keepalived and then test our setup.

Step 7

First we must run nova list to see which IP addresses have been allocated to the virtual machines.

Then we can run neutron port-list to correlate those IPs to their respective port ID.
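For example:

nova list
neutron port-list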

We see that the VMs have the IPs 192.168.168.8 and 192.168.168.11 which correspond to the port IDs of 7151f77a-b590-422a-bc4e-a630a7b578d4 and 9a5e3e6c-b446-4721-84dc-ae9f9de982c4 respectively.

Using these port IDs and VIP address we can update the ports to allow “address-pairing” with the VIP address 192.168.168.3:
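With the classic neutron syntax this looks as follows:

neutron port-update 7151f77a-b590-422a-bc4e-a630a7b578d4 \
  --allowed-address-pairs type=dict list=true ip_address=192.168.168.3
neutron port-update 9a5e3e6c-b446-4721-84dc-ae9f9de982c4 \
  --allowed-address-pairs type=dict list=true ip_address=192.168.168.3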

Now, if we view one of the ports, we see that the Allowed Address Pairs field is updated with the VIP:
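For example:

neutron port-show 7151f77a-b590-422a-bc4e-a630a7b578d4

The allowed_address_pairs field should now contain the VIP 192.168.168.3.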

Step 8

Installing httpd and keepalived:

There are a few ways in which we could have these services installed and the VMs configured. One would be to provide a user-data script upon VM creation.

Since we haven’t done this, we’ll need to SSH to the VMs and install and configure everything manually.

One way to SSH is via an IP network namespace (ip netns). Alternatively, we can associate floating IPs with the VMs’ real IPs so that they can be accessed externally.

A third way would be to use a “jump” server that happens to be on the same internal network and has a floating IP. We can use that floating IP to access it externally.

In the following example we will simply SSH using an IP network namespace via the OpenStack controller. Make sure you have the SSH key on the controller so that it can be used for authentication.

The following steps are to be performed on both vm01 and vm02, apart from keepalived.conf, which is different on each VM.

Now, using the qdhcp netns ID, we can access the VMs:
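For example (the namespace is named qdhcp- followed by the internal network’s ID; run ip netns on the controller to list the available namespaces):

ip netns
sudo ip netns exec qdhcp-<internal_net_id> ssh -i sshkey centos@192.168.168.8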

Then install keepalived and httpd. Enable and start httpd:
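On each VM:

sudo yum -y install keepalived httpd
sudo systemctl enable httpd
sudo systemctl start httpd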

Write an identification string to the default HTML page. Later on, this will tell us which VM is replying:
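For instance (the strings themselves are arbitrary, as long as they differ per VM):

echo "vm01" | sudo tee /var/www/html/index.html    # on vm01
echo "vm02" | sudo tee /var/www/html/index.html    # on vm02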

Step 9

For the keepalived.conf we have a per-VM configuration file:

For vm01:
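A minimal sketch (virtual_router_id, priority and auth_pass are example values; virtual_router_id and auth_pass must match on both VMs):

vrrp_instance VIP_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass supersecret
    }
    virtual_ipaddress {
        192.168.168.3
    }
}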

For vm02:
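The same file, with only the state and a lower priority changed:

vrrp_instance VIP_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass supersecret
    }
    virtual_ipaddress {
        192.168.168.3
    }
}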

Enable and start keepalived on both VMs:
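On both VMs:

sudo systemctl enable keepalived
sudo systemctl start keepalived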

On vm01 you can already see that the VIP is configured on eth0 alongside the default IP:
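For example:

ip addr show eth0

The VIP should show up as a secondary address, e.g. an inet 192.168.168.3/32 entry next to the VM’s own 192.168.168.8/24.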

Note that the firewall might block VRRP (protocol 112) packets. In such a case you need to allow those in the firewall:

# firewall-cmd --add-rich-rule='rule protocol value="vrrp" accept' --permanent

# firewall-cmd --reload

If you are using “plain” iptables you can run something like:

# iptables -A INPUT -p vrrp -i eth0 -j ACCEPT

Take into account that SELinux can deny access to keepalived. To resolve this you can run:

grep keepalived_t /var/log/audit/audit.log | audit2allow -M keepalived_t

and

semodule -i keepalived_t.pp

Step 10

The floating IP 10.10.0.237 is associated to the VIP 192.168.168.3 which is currently active on vm01. Therefore, we should be able to ping it at this point.
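For example, from a host with access to the external network:

ping -c 3 10.10.0.237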

And if we run curl to this address we should get vm01's default html page reply:
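Assuming the identification strings written in step 8:

curl http://10.10.0.237/index.html    # replies with vm01's string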

Let’s shut down vm01 and try curl once more. We should see vm02 replying as it has taken over the VIP:
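A sketch of the test, with nova stop issued from the Undercloud:

nova stop vm01
curl http://10.10.0.237/index.html    # after a few moments, replies with vm02's string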

Conclusion

In this blog we introduced you to Neutron’s allowed-address-pairs (shared IP) and how to use them to provide a highly available service.

In more complicated scenarios, you might want to use the HAProxy load-balancer together with keepalived spanning multiple VMs (e.g. 5 Kubernetes master nodes). The idea is the same: the shared IP must be configurable on any of the VMs in the cluster.

We outlined the manual steps you need to take to configure the VMs in a highly-available setup. These steps can certainly be automated.

At Jexia, we need our setup to allow for auto-scaling. This means we can automatically deploy VMs and configure them to either extend an existing cluster, or form a new one.

To do that, we need to fully automate the Neutron configuration in several key areas. These include the allowed-address-pairs settings for newly deployed VMs, and the configuration of the load-balancers, keepalived, and any other service required on the cluster.

You can refer to a previous article posted at Jexia regarding auto-scaling and Serverless Computing: https://medium.com/p/a9a8162f4983

Sources

Neutron Specs, Allowed Address Pairs: https://specs.openstack.org/openstack/neutron-specs/specs/api/allowed_address_pairs.html

RedHat, Configure Allowed Address Pairs: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/9/html/networking_guide/sec-allowed-address-pairs

Highly Available VIPs on Openstack: https://blog.codecentric.de/en/2016/11/highly-available-vips-openstack-vms-vrrp/
