Foreman/QuickStack Guide:

This page aims to provide a step-by-step guide to replicating experiment #1 shown in this page.

The guide will cover setting up an OPNFV target environment capable of executing Tempest.


The OPNFV testing environment is assembled from a handful of open source tools, listed below in hierarchical order:

  1. Vagrant - Create and configure lightweight, reproducible, and portable development environments.
  2. VirtualBox - A "hosted" hypervisor used to host the Foreman node.
  3. Khaleesi - An Ansible framework responsible for kicking off builds and tests
  4. Foreman - A baremetal/virtual host management tool
  5. OPNFV/Genesis - Puppet modules for invoking QuickStack
  6. QuickStack - Puppet modules for installing/configuring OpenStack + OpenDaylight
  7. OpenStack Puppet Modules (OPM) - Used to install OpenStack
  8. OpenDaylight Puppet Module - Used to install OpenDaylight

The tools above work together to create the OPNFV target system, but they are not dependent on each other. For example, instead of Foreman you could use another baremetal provisioner, or simply use raw Puppet to install OPNFV. Khaleesi contains a playbook/library to interact with Foreman, but it is also used to provision other OpenStack clouds (Rackspace, etc.) and can drive other OpenStack installers. The order below assumes the script is running on a baremetal server.

The order of operations in which these tools interact, from start to end, is as follows:

Vagrant → invokes VirtualBox to build a CentOS VM for Foreman node and a shared filesystem with the host →

Khaleesi → invokes playbook to rebuild Foreman nodes →

Foreman → installs CentOS and Puppet agent to nodes →

Puppet Agent on each node → checks in and applies OPNFV/Genesis →

OPNFV/Genesis → installs/configures OpenStack and ODL using QuickStack, OPM, and ODL modules.

Khaleesi → invokes playbook to install and configure Tempest on the Foreman node →

Khaleesi → runs Tempest and provides results

The diagram below presents a brief summary of the various components in the installer and the interactions between them.

Click here for more details on tool interactions

QuickStack/Foreman Video Recordings


  1. One provisioning server and 2-3 node servers.
  2. All servers will run CentOS 7.
  3. The provisioning host is installed with CentOS 7.
  4. The provisioning host runs on baremetal, or on a VM capable of running VirtualBox, with hardware virtualization extensions (Intel VT-x or AMD-V) and Physical Address Extensions (PAE/NX) enabled.
  5. The management network should not have a DHCP server if using Foreman; Foreman will run its own DHCP server.
  6. If your network is behind a firewall, you will need to use a proxy. This guide has instructions on how to set up the tools to use your network's proxy server.
  7. If OpenStack is being deployed onto baremetal servers, the control and compute nodes need to be:


Below is the topology being used in Intel POD 1:


It is now recommended that you follow the "Automatic Deployment" section below for installation. To replicate a full manual install, follow all of the steps below. The steps are broken down by tool, in case you are only interested in using part of the OPNFV install:

ISO Installation

Download the Foreman ISO
The ISO is too large to fit on a DVD; use isohybrid and dd to write it to a USB stick:

      laptop$ isohybrid arno.2015.1.0.foreman.iso
      laptop$ sudo dd if=arno.2015.1.0.foreman.iso of=/dev/sdX bs=4M
      laptop$ sync

Important: replace /dev/sdX with the device of your USB stick.
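Before dd'ing, it is worth sanity-checking the download. A minimal sketch of a checksum verification, assuming a SHA-256 file is published alongside the ISO (here "demo.iso" stands in for the real image, and the .sha256 file is generated locally purely for demonstration):

```shell
# Sketch: verify an image's checksum before writing it to a USB stick.
# "demo.iso" is a stand-in for arno.2015.1.0.foreman.iso; a download
# mirror would normally publish the .sha256 file, but we generate it
# here so the example is self-contained.
ISO=demo.iso
echo "stand-in image contents" > "$ISO"
sha256sum "$ISO" > "$ISO.sha256"

# -c re-reads the file and compares the recorded and actual digests
sha256sum -c "$ISO.sha256" && echo "checksum OK"
```

In practice you would skip the first two commands and run only the `sha256sum -c` step against the real ISO and its published checksum file.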

Next, boot off the ISO and run the CentOS installation. The file referenced in the next section, Automatic Deployment, is installed with the ISO. Continue the OPNFV installation with the Automatic Deployment instructions.

Automatic Deployment

Foreman/QuickStack can now be automatically deployed!

A simple bash script will provision a Foreman/QuickStack VM server and 4-5 other baremetal nodes into an OpenStack HA + OpenDaylight environment.


  • At least 5 baremetal servers, with 3 interfaces minimum, all connected to separate VLANs
  • DHCP should not be running on any of these VLANs; Foreman will act as the DHCP server.
  • On the baremetal server that will be your JumpHost, the 3 interfaces must be configured with IP addresses
  • The baremetal JumpHost needs an RPM-based Linux (CentOS 7 will do) with an up-to-date kernel (yum update kernel) and at least 2 GB of RAM
  • Nodes must be set to PXE boot first in boot priority, off their first NIC, which is connected to the same VLAN as NIC 1 of your JumpHost
  • Nodes need BMC/OOB management set up via IPMI
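Because the deploy script runs the Foreman VM under VirtualBox on the JumpHost, the hardware virtualization requirement above can be verified up front. A quick Linux-only check (reading /proc/cpuinfo; "vmx" indicates Intel VT-x, "svm" indicates AMD-V):

```shell
# Check whether the CPU exposes hardware virtualization extensions.
# "vmx" is the Intel VT-x flag, "svm" is the AMD-V flag. This only
# confirms CPU support; the feature must also be enabled in the BIOS.
if grep -Eq '\b(vmx|svm)\b' /proc/cpuinfo; then
    echo "virtualization extensions present"
else
    echo "virtualization extensions missing"
fi
```

If the extensions show as missing on hardware that should support them, check that VT-x/AMD-V is enabled in the BIOS/UEFI setup.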

How It Works

  1. Detects your network configuration (3 or 4 usable interfaces)
  2. Modifies a "ksgen.yml" settings file and Vagrantfile with necessary network info
  3. Installs Vagrant and dependencies
  4. Downloads the CentOS 7 Vagrant basebox and issues a "vagrant up" to start the VM
  5. The Vagrantfile points to as the provisioner to take over the rest of the install

The provisioner script:

  1. Is initiated inside the VM once it is up
  2. Installs Khaleesi, Ansible, and Python dependencies
  3. Makes a call to Khaleesi to start a playbook: opnfv.yml + the "ksgen.yml" settings file

Khaleesi (Ansible):

  1. Runs through the playbook to install Foreman/QuickStack inside the VM
  2. Configures services needed for a JumpHost: DHCP, TFTP, DNS
  3. Uses info from the "ksgen.yml" file to add your baremetal nodes into Foreman and set them to Build mode
  4. Issues an API call to Foreman to rebuild all nodes
  5. Ansible then waits to make sure the nodes come back, via SSH checks
  6. Ansible then waits for Puppet to run on each node and complete
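The SSH wait in step 5 amounts to polling each node until its SSH port answers. A minimal sketch of that idea in shell — the function name, host, and timing values are illustrative, and the real deployment does this through Ansible rather than a script like this:

```shell
# Illustrative poll loop, similar in spirit to the reachability check
# performed while waiting for rebuilt nodes. All names and values here
# are hypothetical; the actual check is done by Ansible.
wait_for_port() {
    local host=$1 port=$2 tries=${3:-30}
    local i
    for i in $(seq "$tries"); do
        # bash's /dev/tcp pseudo-device attempts a TCP connection;
        # `timeout` caps each attempt at 2 seconds
        if timeout 2 bash -c ">/dev/tcp/$host/$port" 2>/dev/null; then
            echo "$host:$port is reachable"
            return 0
        fi
        sleep 2
    done
    echo "$host:$port did not come back"
    return 1
}
```

For example, `wait_for_port 10.4.1.2 22` (a made-up node address) would block until that node answers on SSH or the retry budget runs out.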

Execution Instructions

On your JumpHost, clone or download the bgs_vagrant Arno repo:

              $ sudo -s
              # cd /root
     To download the Arno release
              # git clone -b v1.0
              # cd bgs_vagrant
     Or to use the latest build
              # git clone
              # cd genesis/foreman/ci

Edit opnfv_ksgen_settings.yml → "nodes" section:

      For each node (compute, controller1..3):
                * mac_address - change to the MAC address of that node's Admin NIC (1st NIC)
                * bmc_ip - change to the IP address of the node's BMC (out-of-band management) interface
                * bmc_mac - same as above, but the MAC address
                * bmc_user - IPMI username
                * bmc_pass - IPMI password
      For each controller node:
                * private_mac - change to the MAC address of the node's Private NIC (2nd NIC)

Note: Do not change the domain name of the nodes in opnfv_ksgen_settings.yml. They must be in the domain.
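With several MAC and IP fields to fill in per node, it can be convenient to script the edits. A minimal sketch using sed on a stand-in file — the YAML layout and all values shown here are assumptions, so check them against your copy of opnfv_ksgen_settings.yml before adapting this:

```shell
# Demonstration on a stand-in file; the field names mirror the guide,
# but the exact YAML layout of opnfv_ksgen_settings.yml may differ.
cat > nodes_demo.yml <<'EOF'
nodes:
  compute:
    mac_address: "00:00:00:00:00:00"
    bmc_ip: "0.0.0.0"
EOF

# Substitute the placeholder MAC with the node's real Admin NIC MAC
# (52:54:00:aa:bb:cc is a made-up example value).
sed -i 's/mac_address: "00:00:00:00:00:00"/mac_address: "52:54:00:aa:bb:cc"/' nodes_demo.yml
grep mac_address nodes_demo.yml
```

The same pattern extends to bmc_ip, bmc_user, and the other fields; a quick `grep` afterwards confirms each substitution landed.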

Execute via:

      # ./ -base_config $PWD/opnfv_ksgen_settings.yml
get_started_experiment1.txt · Last modified: 2015/09/21 15:59 by Sai Sindhur Malleni