Foreman/QuickStack Current Status

ToDo / List of open issues

Jira is now used for managing the ToDo list. Please see the BGS issue list in the OPNFV Jira setup. You will need your Linux Foundation login to access Jira.

List of integration points with CI

Sequence of integration points with CI ("touch points where your installer is expected to be kicked by Jenkins - along with what you expect to deliver back in case of success or failure").

  • pointer to script to kick off installer

Status Updates


  • Missed the status report for last week, so the status was: created and committed Ceph puppet modules that create Ceph OSDs and mons (3 mons for 3 controllers). This is working; still waiting on reviews and commits into Gerrit, but the local copy works.
  • Deployment works in Intel POD 1; going to try on LF POD 1.


  • OpenStack HA is now working in Intel POD 1: 3 controllers with Pacemaker, Galera (MariaDB), Keystone, Glance, Nova, Horizon, Cinder, and Neutron behind HAProxy.
  • ODL is also integrated into HA. There seem to be some issues with ODL provisioning VXLAN tunnels incorrectly; will investigate.
  • Ceph is now working with Cinder. The plan is to create an OSD on 1 controller and have all 3 controllers act as Ceph mons.
  • The script has been submitted to Gerrit for review.
  • Planning a commit this week to add the HA and Ceph functionality to the genesis puppet modules.
  • Need to add support to Khaleesi as well.
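The Ceph/Cinder integration above normally means enabling the RBD volume driver in cinder.conf. A minimal sketch of what that stanza could look like — the backend name, pool name, and user name here are assumptions, not taken from the actual deployment:

```ini
# Hedged sketch of a cinder.conf RBD backend (names are illustrative assumptions)
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
# rbd_secret_uuid must match the libvirt secret holding the cinder user's key
```

With three controllers acting as mons, the `[global] mon_host` list in ceph.conf would name all three, so Cinder keeps working if one controller fails.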


  • The first iteration of the build script is complete. It needs to be tested; will do that this week and hopefully commit to genesis by Friday.
  • The build script installs Vagrant + Khaleesi; Khaleesi then installs and configures Foreman in a 1- or 3-network topology. Only bare-metal provisioning is supported right now; virtual support will come in a future patch.
  • Dan Radez has most of the HA working with the SpinalStack Cloud puppet modules. He will be integrating ODL and hopefully switching out QuickStack for SpinalStack this week.
  • Tuesday 3/24 will be the second session of the Foreman/QuickStack walkthrough. We will go over how Foreman is used to help create the target OPNFV system.
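The Vagrant side of the build flow above amounts to defining the provisioning VM declaratively. A hedged sketch of such a Vagrantfile — the box name, network address, resource sizes, and bootstrap script name are all assumptions for illustration:

```ruby
# Hedged sketch: Vagrantfile for the Foreman provisioning VM
# (box, IP, memory, and bootstrap.sh are illustrative assumptions)
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  # Admin/provisioning network the Foreman VM serves PXE/DHCP on (assumed address)
  config.vm.network "private_network", ip: "192.168.1.1"
  config.vm.provider :libvirt do |lv|
    lv.memory = 4096
    lv.cpus = 2
  end
  # Khaleesi (Ansible-based) would then install and configure Foreman inside the VM
  config.vm.provision "shell", path: "bootstrap.sh"
end
```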


  • Progress has been made on the script to install the provisioning server. The current plan is for the script to bring up a Vagrant CentOS 7 VM, then use Khaleesi to install and configure Foreman and the puppet modules.
  • Dan Radez is integrating the SpinalStack puppet modules into our current installer, which will provide HA support.
  • Suresh Subramanian is testing the user guide in a completely virtual scenario. His efforts should help us add support for deploying with VMs and enhance the user guide.
  • The user guide has been broken up into individual pieces. There is now also a topology diagram, as well as steps on how to use Khaleesi to run Tempest.


  • Khaleesi is now working on POD 1. Khaleesi will install and configure Tempest, rebuild the Foreman nodes, then run Tempest.
  • Tim Rozet to commit the changes to Khaleesi and provide instructions on the wiki.
  • The Foreman/QuickStack wiki is going to be broken up into pieces and made easier to follow.
  • POD 2 is now up in the Intel lab.
  • OpenStack controller HA is coming along. Dan Radez has gotten RabbitMQ and database HA working.
  • Will update the wiki this next week with the current topology diagram and info.
  • POD 1 should be ready to integrate into Jenkins and test out Khaleesi jobs.
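The Khaleesi Tempest flow above (install, configure, run) is Ansible-driven; a hedged sketch of what such a play could look like. The host group, package name, and workspace path are assumptions, and `tempest init` / `tempest run --smoke` is the modern Tempest CLI used here only for illustration:

```yaml
# Hedged sketch of a Khaleesi-style Tempest play (names and paths are assumptions)
- hosts: tempest_node
  tasks:
    - name: Install Tempest
      yum:
        name: openstack-tempest
        state: present
    - name: Initialize a Tempest workspace pointed at the deployed cloud
      command: tempest init /root/tempest-workspace
    - name: Run the Tempest smoke suite
      command: tempest run --smoke
      args:
        chdir: /root/tempest-workspace
```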


  • Tempest is now installed and running on a bare-metal node in the Intel testbed. Working on a puppet module to install and configure Tempest automatically.
  • Was able to use Foreman to launch KVM VMs through libvirt. This confirms we can use this for a demo virtualized deployment of the installer if we want. Also, we should move the Tempest node to a VM.
  • POD 2 is almost ready for use in the Intel lab.
  • Dan Radez is still looking into HA options for the OpenStack control node.
  • Tim Rozet is working on the Tempest puppet module, then Khaleesi integration.


  • Foreman will now install/rebuild OpenStack in the Intel lab (topology: 1 compute node and 1 network + control node).
  • There are 5 servers currently in the Intel lab: 2 have BMC issues, 2 are being used as mentioned above, and the final one will be used to test HA.
  • Changes to the appropriate modules have been made in the Intel lab (official commits pending) to install OpenDaylight and configure it as a Neutron plugin. Optional parameters set in Foreman decide whether to use OpenDaylight or another ML2 driver.
  • There are two issues blocking further testing with ODL as the Neutron plugin right now:
    • When OpenDaylight comes up, it can take up to 60 seconds for all of its features to install and become active. We need to wait for all the features to be up before we enable Open vSwitch and set ODL as the ovsdb manager. This ensures we do not miss the window in which ODL's ovsdb plugin detects the new node and configures br-int on it.
    • Sometimes Neutron sends API calls to ODL and receives a 200 OK back, while ODL never pushes any flows to Open vSwitch. This needs to be looked at so we can reliably know what we are configuring and get rid of false positives.
  • Tim Rozet is working on a fix for issue 1; issue 2 is TBD after that.
  • Dan Radez is working on getting the HA integration done.
  • Joseph Gasparakis is working on putting together a build script for Foreman in order to make it available to Octopus.
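The first blocking issue above is a wait-for-readiness problem: don't attach Open vSwitch until every ODL feature reports active. The core of a fix is a bounded polling loop; a minimal generic sketch, where the actual feature check (which would query ODL's REST API) is an assumption and is stubbed out:

```python
import time

def wait_for(predicate, timeout=60.0, interval=2.0):
    """Poll predicate() until it returns True or timeout elapses.

    Returns True on success, False on timeout. In the installer this
    would gate the ovs-vsctl set-manager step; the real predicate
    (querying ODL for feature state) is an assumption, not shown.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# Stub standing in for the ODL feature check: "ready" after three polls
class FakeFeatureCheck:
    def __init__(self, ready_after):
        self.calls = 0
        self.ready_after = ready_after

    def __call__(self):
        self.calls += 1
        return self.calls >= self.ready_after

check = FakeFeatureCheck(ready_after=3)
print(wait_for(check, timeout=10.0, interval=0.01))  # True once the stub reports ready
```

With a 60-second feature bring-up, a 2-second interval and a timeout comfortably above 60 seconds would match the behavior described in the bullet.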
get_started/foreman_quickstack_status.txt · Last modified: 2015/04/17 15:28 by Tim Rozet