Arno Foreman Install on a Single CentOS 7 laptop

Basic Steps

Getting through this install can surface several issues and requires some workarounds. The default method of deployment is "bare metal", meaning the script provisions other physical servers. You need to pass "-virtual" as a parameter in order to provision VMs rather than five other bare metal servers. Please follow these steps:

1.  Install CentOS 7 on a server that has minimum
  * 250GB storage
  * 18 GB RAM (10 GB for non-HA)
  * 1 NIC configured with internet access
2.  git clone
3.  cd genesis/foreman/ci
4.  Depending where you are:
  * If in China: ./ -virtual -ping_site -static_ip_range <your_range>
  * Otherwise:   ./ -virtual -static_ip_range <your_range>
  * For non-HA: ./ -virtual -static_ip_range <your_range> -base_config <full path to pwd>/opnfv_ksgen_settings_no_HA.yml

Where <your_range> is a contiguous block of public IP addresses you can use.

The IP range can be determined by looking at the "ifconfig" output and starting at the next "x01" address in the 192.168 range; on home Wi-Fi, for example, that is where the range ended up.
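A quick way to see what the host is already using (a sketch; "ip -4 addr" is the iproute2 equivalent of the ifconfig output mentioned above):

```shell
# List the host's current IPv4 addresses; pick an unused block
# adjacent to them for <your_range>.
ip -4 addr show | awk '/inet /{print $2}'
```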

A successful deployment should complete with the following line:

Virtual deployment SUCCESS!! Foreman URL:, Horizon URL:

Troubleshooting

APEX-19: Verify at start that the default gateway is in the same subnet as the given IP range

It has been observed that if you pick an IP range whose subnet is not the same as the default gateway's, the created VMs inexplicably cannot access the internet. The script should verify this at the beginning and report an error if it is discovered.

APEX-11 (Fixed on 09/04/2015): Some interfaces ignored due to a regexp bug

If you see the following in the console:

==> default: Specific bridge 'eth_replace0' not found. You may be asked to specify
==> default: which network to bridge to.
==> default: Available bridged network interfaces:
    1) wlo1
    2) enp0s25
==> default: When choosing an interface, it is usually the one that is
==> default: being used to connect to the internet.
    default: Which interface should the network bridge to?

The fact that you hit this prompt at all was a bug. The script can normally find the correct interface automatically (it should have found "wlo1"), but it skips any interface with "lo" in its name in an attempt to skip loopback interfaces. The workaround is to change the regexp to match "^lo" instead of "lo".

The entire change was on line 945, from this:

    output=`ifconfig | grep -E "^[a-zA-Z0-9]+:"| grep -Ev "lo|tun|virbr|vboxnet" | awk '{print $1}' | sed 's/://'`

to this:

    output=`ifconfig | grep -E "^[a-zA-Z0-9]+:" | grep -Ev "^lo|^tun|^virbr|^vboxnet" | awk '{print $1}' | sed 's/://'`
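The effect of anchoring the pattern can be checked in isolation; the interface names below mirror the console output above:

```shell
# Simulate the candidate interface list from the console output above.
ifaces="wlo1
lo
enp0s25"

# Unanchored pattern: "lo" matches anywhere, so "wlo1" is wrongly dropped.
echo "$ifaces" | grep -Ev "lo|tun|virbr|vboxnet"        # prints: enp0s25

# Anchored pattern: "^lo" only matches names that start with "lo".
echo "$ifaces" | grep -Ev "^lo|^tun|^virbr|^vboxnet"    # prints: wlo1, enp0s25
```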

This bug was filed as APEX-11. One concern is that inferring the semantics of an interface from its name is suspect at best.

Please note that APEX-11 has been fixed since September 4, 2015 for both VM and bare metal deployment.

Do Some Clean-up before Restarting

At this point, since the deployment is half done, you may have to do some cleanup before restarting. You can run the provided clean-up script. If that doesn't work, go to "/var/opt/opnfv" (an older version of the script created the VMs in /tmp, which made rebooting the box awkward), run "vagrant destroy" in each of the VM directories defined in that folder, verify no VMs are still running with "vboxmanage list runningvms", and then delete the entire "opnfv" directory.
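The manual steps above can be sketched as one function; this is a hypothetical helper, not the project's own clean-up script, and it assumes the VM directories live under /var/opt/opnfv as described:

```shell
# Hypothetical clean-up sketch: destroy each leftover Vagrant VM under
# /var/opt/opnfv, confirm nothing is still running, then remove the tree.
cleanup_opnfv() {
    local base=/var/opt/opnfv
    for vm in "$base"/*/; do
        # Only directories that actually hold a Vagrant definition.
        [ -f "$vm/Vagrantfile" ] || continue
        (cd "$vm" && vagrant destroy -f)
    done
    vboxmanage list runningvms   # should print nothing
    rm -rf "$base"
}
# Usage: cleanup_opnfv
```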

Handling Firefox if the Attempt to Reach the Foreman URL Fails

The first attempt to reach the Foreman URL from Firefox will likely fail, unless Firefox wasn't running when you ran the deployment; if it was running, just exit and restart it. If this isn't your first complete run through the deployment, it may still fail because of an invalid certificate, which is installed as part of the deployment. Fix this by going into your Firefox preferences and deleting the cert with "opnfv" in the URL.

Internet Unreachable from VM / APEX-2 (Fixed): Default Vagrant Route Exists in Virtual Setups Post Deployment

Another issue that you may run into is that the internet is initially unreachable from any of these VMs, because they somehow end up with two default routes, one of which is invalid. This is evident if you see something like this:

[root@oscontroller1 ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         UG    0      0        0 enp0s10
default         UG    0      0        0 enp0s3
                U     0      0        0 enp0s3

The fix is to remove the invalid gateway with:

route del default gw
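As a sketch, the fix can be scripted; the gateway address is taken as an argument because the invalid one varies per setup:

```shell
# Hypothetical helper: delete the stale default route, keeping the valid one.
# Run as root on the affected VM; pass the invalid gateway address.
drop_stale_default_route() {
    route del default gw "$1"
}
# Afterwards, "route" should list a single default entry.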

This was logged as APEX-2, which has since been fixed.

The Given Horizon URL is Invalid

The given Horizon URL is invalid: it is a "private" IP, but should be a public IP. This was logged as a bug.

The workaround is to ssh to the controller ("vagrant ssh"), run "ifconfig", and find an interface with an IPv4 address beginning with "192.168.1.x"; use that address in place of the one given.
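A one-liner sketch of that lookup, run on the controller (it assumes the public block is 192.168.1.x as described above):

```shell
# Print any IPv4 addresses in the 192.168.1.x block;
# the match is the address to use for the Horizon URL.
ip -4 addr show | awk '/inet 192\.168\.1\./{print $2}'
```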

Issues in OpenStack Verification

At this point, you can start the "OpenStack Verification" section of the install guide. You should be able to create the volume, image, and launch a few instances before you hit any issues.

No Instance Displays on the Console

One issue is that when you try to view the console of either instance, nothing ever displays.

The workaround for this is a little gnarly. To log in to these instances from the shell, first "vagrant ssh" to the controller, then source the "keystonerc_admin" file to set some auth-oriented environment variables, then run "nova list" to remind yourself of the instances' IP addresses, then run "ip netns list", which returns a big uuid value representing the DHCP network namespace. You'll use that uuid in a couple of other commands.

Still in the controller, you do the following (where $uuid is the uuid value from "ip netns list"):

% ip netns exec $uuid ssh cirros@

Then create another shell from the main box, ssh to the controller, and do this:

% ip netns exec $uuid ssh cirros@

You can now ping the second instance from the first and vice versa. This represents the end of the "OpenStack Verification" section.

You may want to go one step further and verify that the target VM was really the one being pinged. You can do this by running "tcpdump -i any icmp" on the VM you are trying to ping; when you ping from the other VM, the debug output shows the pings arriving at that VM.

Alternatively, you can run tcpdump from the compute node, specifying the tap interface corresponding to each IP. This requires ssh'ing to the compute node and running "ovs-vsctl show", which lists the two tap interfaces but does not show which IP corresponds to which. Running "ifconfig" on each instance lets you find out its port id.

Then, still on the compute node, do "tcpdump -i <interfacename> icmp" for the first tap interface. Create another shell, ssh to the compute node, and do the same for the other tap interface name. Now, go back to the shell where you're logged into each instance, and do the "ping" to the other instance. You should now see output from one of the tcpdump calls, showing the pings traveling on that interface.
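The per-terminal capture step can be wrapped in a small helper; this is a hypothetical sketch, and the tap interface name is whatever "ovs-vsctl show" reported on the compute node:

```shell
# Hypothetical helper: capture ICMP on one tap interface for a short
# window while you ping from the other instance. Run on the compute node.
watch_icmp_on_tap() {
    local tap="$1"             # e.g. a tap name from "ovs-vsctl show"
    timeout 15 tcpdump -ni "$tap" icmp
}
# Usage: watch_icmp_on_tap <tap-interface-name>
```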

Since the VMs are no longer in /tmp, the deployment should survive a reboot, though you may have to run "vagrant up" in the controller and compute VM directories, and perhaps do something similar in the Horizon GUI for the two instances. If you bring this up on a different network, you might have trouble with the available IP range; this could be mitigated by first creating a VM for the jumphost and starting the deployment process over.

ipv6_opnfv_project/arno_laptop.txt · Last modified: 2015/09/05 01:38 by Bin Hu