Please make sure you have gone through the Initial Foreman Setup Guide before attempting this one. This guide builds on the previous one by replacing and adding infrastructure to enable installing OpenDaylight and configuring it as the Neutron/ML2 plugin.
Please contact firstname.lastname@example.org or email@example.com with any questions.
In our testing we used a single Compute node plus one consolidated Control/Network node. Each node runs its own OVS instance, connected to ODL (OpenDaylight). ODL runs on the Control/Network node alongside all of the usual OpenStack services. The l2_population and ovs_agent Neutron services are disabled, while the L3 agent and DHCP agent are enabled.
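For orientation, the end state the puppet modules drive toward is Neutron's ML2 plugin using the ODL mechanism driver, roughly along these lines. This is a sketch only: the file path, URL, and credentials below are illustrative and are managed for you by the puppet modules in this guide.

```ini
# Sketch of the ML2 configuration the puppet modules manage.
# CONTROL_NODE_IP, the port, and the credentials are placeholders.
[ml2]
tenant_network_types = vxlan
mechanism_drivers = opendaylight

[ml2_odl]
# Northbound API of the ODL controller on the Control/Network node
url = http://CONTROL_NODE_IP:8080/controller/nb/v2/neutron
username = admin
password = admin
```

With this driver in place, OVS on each node is programmed by ODL directly, which is why the ovs_agent and l2_population services are disabled while the L3 and DHCP agents continue to run as usual.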
We have successfully brought up two tenants with DHCP on a private VXLAN network and were able to ping between them. Only limited testing has been done so far; more scenarios remain to be tried.
The guide currently relies on puppet module changes to puppet-trystack and quickstack (astapor). It also requires two new puppet modules to be installed on your Foreman server: puppet-opendaylight and puppet-wait-for. New global parameters must also be added to Foreman to tell QuickStack to use ODL as the network driver.
The new puppet modules require additional global parameters, shown below:
If you use the same server hardware or VMs throughout your environment, you can simply set the private interface name as the global value for ovs_tunnel_if, since it will be the same on every node. However, if you use different server models or different NIC interfaces, you need to override this parameter per host. To do this in the Foreman web GUI:
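Interface names often differ between server models, so before setting the override it helps to check what the kernel actually calls the private NIC on each host. A quick sketch (the interface names you see will vary per machine):

```shell
# List the network interface names the kernel knows about; pick the NIC
# cabled to your private (tunnel) network and use that name as the
# ovs_tunnel_if value, globally or as a per-host override in Foreman.
ls /sys/class/net

# Cross-check which interface carries the private-network IPv4 address.
ip -o -4 addr show | awk '{print $2, $4}'
```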
Run puppet agent --test on all nodes to update them. Once everything completes, the "opendaylight" service should be running on your network node. You should also check the OVS instances to confirm that br-int has been created and that OpenDaylight has pushed its default flows:
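The verification steps above might look like the following on a live deployment (a sketch to run on the real hosts, not a script; the "opendaylight" service name matches what this guide's puppet module installs, and the exact flow output depends on your ODL version):

```shell
# Run on every node to pull the updated configuration from Foreman:
puppet agent --test

# On the Control/Network node, confirm the ODL service is up:
service opendaylight status

# On each node, confirm br-int exists and lists ODL as its controller:
ovs-vsctl show

# Dump the flows on br-int; a handful of default flows pushed by ODL
# should be present once the node has connected to the controller.
ovs-ofctl dump-flows br-int
```

If br-int is missing or the flow table is empty, check that the OVS instance's controller target points at the ODL node and that the opendaylight service came up cleanly before retrying the puppet run.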