This will set up a basic functional assessment platform (a learning "academy") along the lines of the Sandbox, prior to getting a BGS environment that we can use. Compared to the Sandbox, the goal of this activity is to install the actual OPNFV components as listed for BGS, on a single node (e.g. a laptop with lots of memory) or in a multi-node environment.
The procedure below has been used to create a 3-node OPNFV install using Intel NUC i7 nodes with 16GB RAM, a 250GB SSD, and a 1TB HDD. The install uses one NUC for the Jumphost, one for the controller (OpenStack + ODL), and one for the compute node.
ip addr
su
visudo
# add the following line to give the opnfv user sudo rights:
opnfv ALL=(ALL) ALL
sudo yum -y update
sudo shutdown -r 0
sudo vi /etc/hostname
# set the hostname, e.g.:
jumphost1.opnfv.org
sudo systemctl stop NetworkManager
sudo systemctl disable NetworkManager
sudo vi /etc/sysconfig/network-scripts/ifcfg-enp0s25
TYPE="Ethernet"
BOOTPROTO="static"
IPADDR=192.168.10.2
NETMASK=255.255.255.0
GATEWAY=192.168.10.1
NM_CONTROLLED="no"
(rest as-is)
sudo service network restart
sudo setenforce 0
sudo vi /etc/resolv.conf
nameserver 8.8.8.8
sudo setenforce 0
sudo sed -i 's/SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo yum -y install ntp
sudo systemctl start ntpd
date
sudo yum -y install net-tools
sudo yum -y install git
cd ~
git clone https://blsaws@gerrit.opnfv.org/gerrit/genesis
curl "https://gerrit.opnfv.org/gerrit/gitweb?p=genesis.git;a=snapshot;h=756ee8c81cfac9a69e8f67811429e63da9af6480;sf=tgz" -o genesis-756ee8c.tar.gz gzip -d genesis-756ee8c.tar.gz tar -xvf genesis-756ee8c.tar
vi ~/genesis-756ee8c/foreman/ci/bootstrap.sh
# change the khaleesi clone line to use the modified fork:
if ! git clone -b opnfv https://github.com/blsaws/khaleesi.git; then
vi ~/genesis-756ee8c/foreman/ci/opnfv_ksgen_settings_no_HA.yml
# Jumphost1:
# (for compute1:)
  name: oscompute1.{{ domain_name }}
  hostname: oscompute1.{{ domain_name }}
  short_name: oscompute1
  mac_address: "B8:AE:ED:76:FB:C4"
# (for controller1:)
  name: oscontroller1.{{ domain_name }}
  hostname: oscontroller1.{{ domain_name }}
  short_name: oscontroller1
  mac_address: "B8:AE:ED:76:FB:45"
  private_mac: "B8:AE:ED:76:FB:45"
# Jumphost2:
# (for compute1:)
  name: oscompute1.{{ domain_name }}
  hostname: oscompute1.{{ domain_name }}
  short_name: oscompute1
  mac_address: "B8:AE:ED:76:C5:ED"
# (for controller1:)
  name: oscontroller1.{{ domain_name }}
  hostname: oscontroller1.{{ domain_name }}
  short_name: oscontroller1
  mac_address: "B8:AE:ED:76:F9:FF"
  private_mac: "B8:AE:ED:76:F9:FF"
cd /opt
sudo git clone -b opnfv https://github.com/blsaws/khaleesi.git
sudo vi /opt/khaleesi/wakenodes.sh
#!/bin/bash
yum -y install net-tools
ether-wake B8:AE:ED:76:FB:C4
ether-wake B8:AE:ED:76:FB:45
ether-wake B8:AE:ED:76:F9:FF
ether-wake B8:AE:ED:76:C5:ED
sudo vi /opt/khaleesi/roles/get_nodes/foreman/tasks/main.yml
# add a task to run the wake-on-LAN script:
- script: /opt/khaleesi/wakenodes.sh
sudo vi /opt/khaleesi/library/foreman.py
# bryan_att modified to skip IPMI stuff
module.exit_json(changed=True, msg="Rebuilding Node")
# change elif to if so the module.exit is outside the previous if block
if ipmi_host is None:
cd ~/genesis-756ee8c/foreman/ci/
sudo ./deploy.sh -single_baremetal_nic enp0s25 -base_config /home/opnfv/genesis-756ee8c/foreman/ci/opnfv_ksgen_settings_no_HA.yml
su
cd /var/opt/opnfv/foreman_vm
vagrant destroy -f
cd -
rm -rf /var/opt/opnfv
exit
==> default: TASK: [get_nodes/foreman | wait_for host={{ item.value.hostname }} port=22 delay=10 timeout=1800] ***
==> default: [[ previous task time: 0:00:03.456516 = 3.46s / 746.15s ]]
==> default: ok: [localhost -> 127.0.0.1] => (item={'key': 'controller1', 'value': {'bmc_user': 'root', 'short_name': 'oscontroller1', 'memory': 4096, 'cpus': 2, 'ansible_ssh_pass': 'Op3nStack', 'bmc_ip': '10.4.17.3', 'hostgroup': 'Controller_Network_ODL', 'groups': ['controller', 'foreman_nodes', 'puppet', 'rdo', 'neutron'], 'disk': 40, 'bmc_mac': '10:23:45:67:88:AC', 'admin_ip': '192.168.1.206', 'name': u'oscontroller1.opnfv.com', 'hostname': u'oscontroller1.opnfv.com', 'host_type': 'baremetal', 'private_mac': 'B8:AE:ED:76:F9:FF', 'bmc_pass': 'root', 'admin_password': 'octopus', 'mac_address': 'B8:AE:ED:76:F9:FF', 'type': 'controller', 'private_ip': '192.168.1.206'}})
==> default: ok: [localhost -> 127.0.0.1] => (item={'key': 'compute', 'value': {'bmc_user': 'root', 'short_name': 'oscompute1', 'memory': 2048, 'cpus': 2, 'ansible_ssh_pass': 'Op3nStack', 'bmc_ip': '10.4.17.2', 'groups': ['compute', 'foreman_nodes', 'puppet', 'rdo', 'neutron'], 'disk': 40, 'bmc_mac': '10:23:45:67:
==> default: 88:AB', 'admin_ip': '192.168.1.207', 'name': u'oscompute1.opnfv.com', 'hostname': u'oscompute1.opnfv.com', 'host_type': 'baremetal', 'hostgroup': 'Compute', 'bmc_pass': 'root', 'admin_password': '', 'mac_address': 'B8:AE:ED:76:C5:ED', 'type': 'compute'}})
==> default:
==> default: TASK: [get_nodes/foreman | set fact with hostnames] ***************************
==> default: [[ previous task time: 0:14:11.636023 = 851.64s / 1597.79s ]]
==> default: ok: [localhost] => (item={'key': 'controller1', 'value': {'bmc_user': 'root', 'short_name': 'oscontroller1', 'memory': 4096, 'cpus': 2, 'ansible_ssh_pass': 'Op3nStack', 'bmc_ip': '10.4.17.3', 'hostgroup': 'Controller_Network_ODL', 'groups': ['controller', 'foreman_nodes', 'puppet', 'rdo', 'neutron'], 'disk': 40, 'bmc_mac': '10:23:45:67:88:AC', 'admin_ip': '192.168.1.206', 'name': u'oscontroller1.opnfv.com', 'hostname': u'oscontroller1.opnfv.com', 'host_type': 'baremetal', 'private_mac': 'B8:AE:ED:76:F9:FF', 'bmc_pass': 'root', 'admin_password': 'octopus', 'mac_address': 'B8:AE:ED:76:F9:FF', 'type': 'controller', 'private_ip': '192.168.1.206'}})
==> default: ok: [localhost] => (item={'key': 'compute', 'value': {'bmc_user': 'root', 'short_name': 'oscompute1', 'memory': 2048, 'cpus': 2, 'ansible_ssh_pass': 'Op3nStack', 'bmc_ip': '10.4.17.2', 'groups': ['compute', 'foreman_nodes', 'puppet', 'rdo', 'neutron'], 'disk': 40, 'bmc_mac': '10:23:45:67:88:AB', 'admin_ip': '192.168.1.207', 'name': u'oscompute1.opnfv.com', 'hostname': u'oscompute1.opnfv.com', 'host_type': 'baremetal', 'hostgroup': 'Compute', 'bmc_pass': 'root', 'admin_password': '', 'mac_address': 'B8:AE:ED:76:C5:ED', 'type': 'compute'}})
==> default:
==> default: TASK: [get_nodes/foreman | make a list] ***************************************
==> default: [[ previous task time: 0:00:00.015421 = 0.02s / 1597.81s ]]
==> default: ok: [localhost]
==> default:
==> default: TASK: [get_nodes/foreman | debug var=nodes_list] ******************************
==> default: [[ previous task time: 0:00:00.008785 = 0.01s / 1597.81s ]]
==> default: ok: [localhost] => {
==> default: "var": {
==> default: "nodes_list": [
==> default: "oscontroller1.opnfv.com",
==> default: "oscompute1.opnfv.com"
==> default: ]
==> default: }
==> default: }
==> default:
==> default: TASK: [get_nodes/foreman | Wait for puppet to complete] ***********************
==> default: [[ previous task time: 0:00:00.008461 = 0.01s / 1597.82s ]]
==> default: changed: [localhost]
==> default:
==> default: msg:
==> default: Nodes are Active
==> default:
==> default: TASK: [get_nodes/foreman | Print host openstack network type (nova/neutron)] ***
==> default: [[ previous task time: 0:12:01.201807 = 721.20s / 2319.02s ]]
==> default: ok: [localhost] => {
==> default: "var": {
==> default: "provisioner.network.type": "nova"
==> default: }
==> default: }
==> default:
==> default: TASK: [get_nodes/foreman | debug var=nodes_created] ***************************
==> default: [[ previous task time: 0:00:00.010205 = 0.01s / 2319.03s ]]
==> default: skipping: [localhost]
==> default:
==> default: TASK: [get_nodes/foreman | debug var=hostvars] ********************************
==> default: [[ previous task time: 0:00:00.023453 = 0.02s / 2319.06s ]]
==> default: skipping: [localhost]
==> default:
==> default: PLAY RECAP ********************************************************************
==> default: localhost : ok=60 changed=41 unreachable=0 failed=0
==> default: [[ previous task time: 0:00:00.032833 = 0.03s / 2319.09s ]]
==> default: [[ previous play time: 0:26:37.803885 = 1597.80s / 2319.09s ]]
==> default: [[ previous playbook time: 0:38:39.090976 = 2319.09s / 2319.09s ]]
==> default: [[ previous total time: 0:38:39.091181 = 2319.09s / 0.00s ]]
==> default: Exit cleanup ... init.print_result
==> default: running: init.print_result
==> default: ./run.sh: PASSED
==> default: Running provisioner: shell...
default: Running: /tmp/vagrant-shell20150920-11687-22sn6.sh
==> default: Resizing physical volume
==> default: Physical volume "/dev/sda2" changed
==> default: 1 physical volume(s) resized / 0 physical volume(s) not resized
==> default: New physical volume size: 39
==> default: Resizing logical volume
==> default: Extending logical volume root to 38.48 GiB
==> default: Logical volume root successfully resized
==> default: Filesystem resized to: 39G
Foreman VM is up!
[opnfv@jumphost2 ci]$
OpenStack running in a VM had trouble being reachable from compute hosts outside the main host machine. So I am dropping back to running OpenStack directly on the host machine until I figure out how to resolve the connectivity issue (maybe the bridged networking config below would help with that; it's a recent change that was needed to allow the compute hosts running in VMs to connect to OpenStack…).
# bridged networking config for the host (Ubuntu /etc/network/interfaces style)
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0
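To apply a bridge config like this on Ubuntu 14.04, something along these lines should work (a sketch assuming the eth0/br0 names above; bridge-utils is needed for the bridge_* options):

sudo apt-get install -y bridge-utils
sudo /etc/init.d/networking restart   # or reboot; restarting over an SSH session on eth0 may drop the connection
brctl show                            # verify that eth0 is now a port of br0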
glance_host = 192.168.1.132
my_ip = 192.168.122.209
vnc_enabled = True
vncserver_listen = 192.168.1.132
vncserver_proxyclient_address = 192.168.1.132
novncproxy_base_url = http://192.168.1.132:6080/vnc_auto.html
rpc_backend = rabbit
rabbit_host = 192.168.1.132
rabbit_password = opnfv
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://192.168.1.132:5000
auth_host = ubuntu-1404-openstack
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = opnfv

[database]
# The SQLAlchemy connection string used to connect to the database
connection = mysql://nova:opnfvmysql@192.168.1.132/nova
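After editing nova.conf, the Nova services on that host need a restart to pick up the change; on Ubuntu 14.04 that is roughly (service names assumed from the stock Ubuntu packages, not from this setup):

sudo service nova-compute restart   # on a compute node
sudo service nova-api restart       # on the controller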
To address some of the potential snags occurring due to conflicts between OpenStack, ODL, OVS, libvirt, … I am switching to running OpenStack and the compute nodes under KVM VMs, with additional compute nodes on other machines. At this point ODL is not in the picture, as I first have to figure out how the whole flat-network approach works. The goal for this initial sandbox is to get off the ground with policy feature assessment in OpenStack, so ODL can wait until I learn it better or get guidance on how to factor it into this setup.
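As a rough sketch of how one of these KVM VMs could be created (the name, disk path, ISO, and sizes below are placeholders rather than values from this setup; br0 is assumed to be the bridge configured above):

# create a VM on the br0 bridge to serve as an OpenStack node (all names illustrative)
sudo virt-install \
  --name openstack-controller \
  --ram 8192 --vcpus 4 \
  --disk path=/var/lib/libvirt/images/openstack-controller.qcow2,size=40 \
  --network bridge=br0 \
  --cdrom /var/lib/libvirt/images/ubuntu-14.04-server-amd64.iso \
  --graphics vnc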
At this point I have most of the above working. What is not currently working is getting the external compute node connected to OpenStack Nova, likely due to some firewall issue with KVM that I need to fix.
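A few first checks on the KVM host (assumptions about where the problem is, not a confirmed fix): the external compute node needs to reach RabbitMQ (5672), the Nova API (8774), and Keystone (5000/35357) on the controller.

sudo iptables -L -n -v | less                            # look for REJECT/DROP rules, e.g. ones added by libvirt
sudo netstat -tlnp | grep -E ':(5672|8774|5000|35357)'   # confirm the services listen on a reachable address
sudo ufw status                                          # if ufw is active, allow the ports explicitly, e.g.:
sudo ufw allow 5672/tcp
sudo ufw allow 8774/tcp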
I'll post details on how this was set up soon.
Returning to a more direct approach, this time ensuring component independence by installing:
  * OpenStack on the host OS
  * ODL in a VM managed by OpenStack
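A rough sketch of what launching the ODL VM through Nova could look like once OpenStack is up (the image, flavor, network id, and key name are placeholders, not values from this setup):

glance image-create --name ubuntu-14.04 --disk-format qcow2 --container-format bare --file trusty-server-cloudimg-amd64-disk1.img
nova boot --image ubuntu-14.04 --flavor m1.medium --nic net-id=<private-net-id> --key-name mykey odl-controller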
Decided pretty quickly that, since some of the earlier issues may have been caused by running OpenStack directly on the host, I would switch to trying it in a VM (take 4).
Trying out Foreman to see if it helps get past some of the basic issues with the manual install.
. . .
Here is a graphic of the concept. This is a very rough draft and leaves many things unclear, partly because I'm not sure how to do them yet.
This procedure is not being pursued further at the moment… too many undocumented things about the overall setup of OpenStack, ODL, etc…
Restart process for sandbox based upon host OS: Ubuntu 14.04 Server LTS
Tried CentOS 7 but kept getting errors: