Configuring OpenStack-Ansible for Distributed Virtual Routing with Open vSwitch
Following up on a prior post, "The OpenStack-Ansible project has recently added support for the Open vSwitch ML2 neutron agent in the Newton release" (medium.com):
The OpenStack-Ansible project has now added support for enabling Distributed Virtual Routing (DVR) when using the Open vSwitch ML2 Neutron agent. Support landed in the master branch with these patches and will be released in the Newton cycle.
Now, deployers who wish to use Open vSwitch can also take advantage of the high availability benefits of DVR. With this change, the OpenStack-Ansible project now supports the Scenario: High Availability using Distributed Virtual Routing described in the OpenStack Networking guide.
Using the same lab hardware and network architecture I described in my previous post and running the master version of OpenStack-Ansible, my configuration for enabling DVR looks like:
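The full configuration embedded in the original post is not reproduced here. As a minimal sketch, the relevant override lives in OpenStack-Ansible's user_variables.yml; only the neutron_plugin_type line is taken from the post, and the surrounding comments are illustrative:

```yaml
# /etc/openstack_deploy/user_variables.yml (sketch)
# The rest of the lab configuration (bridges, provider networks, host
# groups) is unchanged from the previous post and is omitted here.

# Switch the Neutron ML2 plugin from plain Open vSwitch (ml2.ovs)
# to Open vSwitch with Distributed Virtual Routing enabled:
neutron_plugin_type: ml2.ovs.dvr
```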
You may notice that only a single line has changed in this configuration: where I previously had neutron_plugin_type: ml2.ovs, I now have neutron_plugin_type: ml2.ovs.dvr.
With that simple change, OpenStack-Ansible knows to install the Neutron L3 and metadata agents on the compute hosts, and it configures the L3 agent’s agent_mode appropriately depending on whether the agent is running on a network host or a compute host.
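In a DVR deployment, Neutron distinguishes the two host roles through the L3 agent’s agent_mode option: network hosts handle centralized SNAT while compute hosts route east-west and floating-IP traffic locally. A sketch of the resulting l3_agent.ini fragments (file paths and surrounding options may differ in a real deployment):

```ini
; l3_agent.ini on a network host: centralized SNAT plus DVR
[DEFAULT]
agent_mode = dvr_snat

; l3_agent.ini on a compute host: distributed routing only
; [DEFAULT]
; agent_mode = dvr
```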
If you’ve been waiting for DVR support in OpenStack-Ansible, please give it a try and be sure to provide feedback to the community. Bug reports in Launchpad, questions in IRC at #openstack-ansible on Freenode, and emails to the openstack-dev mailing list are all great ways to reach out.