Packstack OPNFV Sandbox

This OPNFV SandBox is intended to be a laptop-compatible dev environment. It is currently under development. Get involved: download it and help evaluate, debug, and improve it!

SandBox currently exists as a template for further sandbox work with Packstack. It brings up two nodes on a machine with at least 6 GB of memory: one controller and one compute/networking node. It is intended to be easy to modify to meet your needs.

SandBox getting started

The sandbox can be run in bridged mode or NAT mode; see below for details.
The documentation on the GitHub page may be more current.

The nodes can be reached after vagrant up with:

  vagrant ssh controller
  vagrant ssh compute 


Get VirtualBox

Get Vagrant

Install vagrant-vbguest

  vagrant plugin install vagrant-vbguest

Get this repo

  git clone && cd PackStackSandBox

NAT Mode

Copy Vagrantfile.yml.template.natmode to Vagrantfile.yml

NAT networking will provide the gateway to the internet as well as connectivity between hosts through the vboxnetX interface created by Vagrant.

Set up masquerade/forwarding on your host to your vboxnet interface.


Make sure these are set in /etc/sysctl.d:

  net.ipv4.ip_forward = 1
  net.ipv4.conf.all.proxy_arp = 1

And load them (sysctl -p alone reads only /etc/sysctl.conf; sysctl --system also reads the files in /etc/sysctl.d):

  sudo sysctl --system

In my example, my host's interface for internet connectivity is docker0 (yours might be eth0, for example), the vboxnet brought up by vagrant up is vboxnet4, and the sandbox machines use the subnet set in Vagrantfile.yml:

  iptables -A FORWARD -o docker0 -i vboxnet4 -s -m conntrack --ctstate NEW -j ACCEPT
  iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
  iptables -A POSTROUTING -t nat -j MASQUERADE

In this example we have set the vboxnet to the range.
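Filled in with hypothetical values (a vboxnet subnet of 10.10.10.0/24; substitute the subnet, outbound interface, and vboxnet name from your own setup), the rules above might look like:

```shell
# Hypothetical example values: docker0, vboxnet4, and 10.10.10.0/24 must
# match your actual outbound interface, vboxnet device, and sandbox subnet.
iptables -A FORWARD -o docker0 -i vboxnet4 -s 10.10.10.0/24 \
  -m conntrack --ctstate NEW -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A POSTROUTING -t nat -s 10.10.10.0/24 -j MASQUERADE
```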


I don't have a Mac, so this is untested:

  /usr/sbin/natd -interface en0
  /sbin/ipfw -f flush
  /sbin/ipfw add divert natd all from any to any via en0
  /sbin/ipfw add pass all from any to any
  sudo sysctl -w net.inet.ip.forwarding=1

vagrant ssh into both the compute and controller nodes and set the default route to vboxnet0 rather than the NAT device that Vagrant sets as the default.

TODO automate this.

  ip route del default
  ip route add default via (the gateway set in your Vagrantfile.yml) dev eth1
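After changing the route, a quick sanity check run inside each node (8.8.8.8 is just an arbitrary external address):

```shell
# Confirm the default route now leaves via eth1, then probe the outside world.
ip route show default
ping -c 3 8.8.8.8
```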

You are now ready to "Launch Vagrant" (see below)

Bridged Mode

If you are able to configure and use a bridge, we can bring up OpenStack VMs on your local network. You will need a netmask of /23 or wider.

My bridge in this readme is called docker0

  $ brctl show
  bridge name     bridge id               STP enabled     interfaces

To start, copy Vagrantfile.yml.template.bridgemode to Vagrantfile.yml and edit it to reflect the network available to you. In this example I have a /22 available on my home network; later we reserve a /24 slice of that /22 for the neutron router we create.

My example config:

  bridge: docker0  

You are now ready to "Launch Vagrant" (see below)

Vagrantfile.yml Explanation

Warning: make sure there is no trailing whitespace in this file.
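A quick way to check: this grep prints the line numbers of any line ending in a space or tab; no matches means the file is clean.

```shell
# List lines in Vagrantfile.yml that end with a space or tab.
grep -n '[[:blank:]]$' Vagrantfile.yml || echo "no trailing whitespace"
```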

nat_mode: set to yes for NAT mode; leave blank for bridged mode

bridge: the name of your bridge interface (see brctl show); leave blank for NAT mode

netmask: the netmask of your private subnet, probably assigned to you via DHCP. You can see it with ifconfig; on OS X it will be shown in an unreadable hex format, something like 0xffffff00 (refer here for a human-readable table). Most home networks only hand out a /24; you will need to log into your router and widen your range to at least a /23 so that we can properly route to the router that neutron creates.
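To decode that hex form without a lookup table, a small shell helper (my own sketch, not part of the repo) converts it to dotted decimal:

```shell
# Convert an OS X ifconfig-style hex netmask (e.g. 0xffffff00) to dotted decimal.
hex_netmask_to_dotted() {
  local n=$(( $1 ))    # shell arithmetic accepts the 0x prefix
  echo "$(( (n >> 24) & 255 )).$(( (n >> 16) & 255 )).$(( (n >> 8) & 255 )).$(( n & 255 ))"
}
hex_netmask_to_dotted 0xffffff00   # 255.255.255.0 (a /24)
hex_netmask_to_dotted 0xfffffe00   # 255.255.254.0 (a /23)
```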

gateway: for bridged mode, your workstation's gateway to the internet (your router's IP; this is also the address you visit to increase your network size). You can check this with ip r on Linux or netstat -nr on OS X. For NAT mode, set this to the first IP in the range you choose for private_ip.

neutron_router_start: the start of your OpenStack DHCP range; I also use this as the neutron router gateway. Give neutron its own /24 range.

neutron_router_end: the end of the range explained above


controller bridged_ip: this interface should be given an IP on the same /24 as your workstation.

controller private_ip: this interface can have any IP you want; VirtualBox deals with the routing.

compute bridged_ip: same rules as the controller bridged_ip, but unique.

compute private_ip: same rules as the controller private_ip, but unique.

For NAT mode, set the bridged_ip and private_ip to the same values for each host (as seen in Vagrantfile.yml.template.natmode).
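Pulling the fields above together, a hypothetical bridged-mode Vagrantfile.yml might look like this (all addresses are made up, and the exact nesting may differ from the shipped templates; treat the templates as authoritative):

```yaml
nat_mode:
bridge: docker0
netmask: 255.255.252.0          # a /22
gateway: 192.168.0.1
neutron_router_start: 192.168.3.10
neutron_router_end: 192.168.3.250
controller:
  bridged_ip: 192.168.0.20      # same /24 as the workstation
  private_ip: 10.10.10.10
compute:
  bridged_ip: 192.168.0.21      # unique
  private_ip: 10.10.10.11       # unique
```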

Launch Vagrant

  vagrant up

ssh into the vagrant controller (password is vagrant)

  vagrant ssh controller

Run packstack (for NAT mode, complete the NAT-specific steps first):

  cd /vagrant
  sudo bash
  packstack  --answer-file=ans.txt && yes|cp /root/keystonerc_admin /vagrant

The answer file is generated from ans.template or ans.NAT.template when you run vagrant up. packstack should now prompt you for the root password of both nodes; the password is "vagrant". If packstack fails for some reason, just run it again.
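Once packstack finishes, you can sanity-check the deployment from the controller. This is my own sketch, not part of the repo; openstack-status comes from the openstack-utils package and may not be present on every install:

```shell
# Verify services came up after packstack completes (run as root on the controller).
source /root/keystonerc_admin
openstack-status       # summary of service states, if openstack-utils is installed
nova service-list      # compute services should report state "up"
```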


To set up networking and launch the minimal CirrOS VM, you must wait for the above operations to complete (packstack and the copy of keystonerc_admin). Once those are done, vagrant ssh into the networking (compute) node:

  vagrant ssh compute
  [vagrant@compute]# sudo bash
  [root@compute ]# cd /vagrant && ./SetupComputeNode

That's it; everything should work now.



Natmode: http://localhost:8080/dashboard/

Bridged mode: http://compute.bridged.ip (the compute bridged_ip from your Vagrantfile.yml)


ssh into the CirrOS VM spawned by ./SetupComputeNode and ping the outside world:

  [root@compute vagrant]# source keystonerc_admin
  [root@compute vagrant(keystone_admin)]# neutron floatingip-list
  | id                                   | fixed_ip_address | floating_ip_address | port_id                              |
  | ea3d5757-e646-4d6e-9c0d-e6304cee3ff0 |       |           | 53157795-741e-479c-afb6-1ceb26fd500e |
  [root@compute vagrant(keystone_admin)]# ssh cirros@
  cirros@'s password: cubswin:)
  $ ping
  PING ( 56 data bytes
  64 bytes from seq=0 ttl=50 time=24.782 ms
  64 bytes from seq=1 ttl=50 time=23.527 ms


Ideally this sandbox will be loaded with the useful tools enumerated here. Right now there are some scripts that I use to set up the networking node.

SetupNeutron: sets up neutron with a router for external connectivity for your VMs; this file is generated by ./build_SetupNeutron

SwitchToQemu: KVM is not supported inside VirtualBox; this script switches to QEMU

LaunchCirrosVM: Launches a vm with the name $1

DeleteNetwork: runs through some loops and removes all OpenStack networking; must be run on the compute node


When restarting networking, the neutron switch becomes unresponsive; you'll need to restart various neutron components:

  service network restart
  for i in dhcp-agent l3-agent metadata-agent openvswitch-agent; \
  do service neutron-$i restart; done
  neutron agent-list
  # takes me about 38 seconds before I can ping the router
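Rather than guessing at the delay, you can poll until every agent reports alive. This is a rough sketch of my own; in that era's neutron CLI the alive column shows ":-)" for live agents and "xxx" for dead ones:

```shell
# Wait until neutron agent-list shows at least one alive agent and no dead ones.
until neutron agent-list 2>/dev/null | grep -q ':-)' \
      && ! neutron agent-list 2>/dev/null | grep -q 'xxx'; do
  sleep 5
done
echo "neutron agents are alive"
```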

Vagrant exits with a syntax error

  Message: undefined method `[]' for nil:NilClass

Try running the included ./testyaml; you may need to install the Ruby YAML library.

Vagrant can't download the box on OS X

  vagrant box add --name controller 
  vagrant init controller

This will help you debug some weird permission errors that we've seen on OS X.

Weird locale issue.

  ERROR : Error appeared during Puppet run:
  Notice: /Stage[main]/Glance::Registry/Exec[glance-manage db_sync]/returns: ValueError: unknown locale: UTF-8

Edit your /etc/ssh_config file on your Mac OS X system and remove LC_CTYPE from SendEnv. This will cause the ssh client to stop propagating LC_CTYPE to the ssh servers.


Vagrant reconfigures the network device eth1 on boot. You will need to run /vagrant/SetupComputeNodeAfterReboot each time the compute node is rebooted.


Fork this repo. Create your feature branch (git checkout -b my-new-feature). Commit your changes (git commit -am 'Add some feature'). Push to the branch (git push origin my-new-feature). Create a new Pull Request.

wiki/sandbox.txt · Last modified: 2015/02/27 18:51 by Aric Gardner