These are rough notes being developed as the Joid install environment for Copper is developed.

Joid-Based Basic Install

Basic install guidelines: https://wiki.opnfv.org/joid/get_started and https://wiki.opnfv.org/joid/get_started#run_the_deployment_using_joid

  • Make sure you have good internet connectivity, as it's required for package install/updates etc.
  • Install Ubuntu 14.04.3 LTS desktop on the jumphost. Use a non-root user (opnfv, with a password of your choice), which by default will be part of the sudoers group.
    • If the jumphost is attached to a gateway with DHCP disabled (as described on the Academy main page), you will need to assign a static IP to the primary ethernet interface during the Ubuntu install (since network autoconfiguration will fail). Use these values:
      • IP address: 192.168.10.2
      • Netmask: 255.255.255.0
      • Gateway: 192.168.10.1
      • DNS: 8.8.8.8 8.8.4.4
  • Install prerequisites
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install bridge-utils vlan git openssh-server -y
mkdir -p ~/.ssh
chmod 700 ~/.ssh
  • Create a bridge brAdm on eth0, by modifying /etc/network/interfaces file on jumphost. Subnet "192.168.10.x" is used here to avoid conflicts with commonly used subnets in LAN environments.
(leave lo section as-is)
iface eth0 inet manual
auto brAdm
iface brAdm inet static
address 192.168.10.2
netmask 255.255.255.0
network 192.168.10.0
broadcast 192.168.10.255
gateway 192.168.10.1

dns-nameservers 8.8.8.8 127.0.0.1
bridge_ports eth0
  • If you installed Ubuntu desktop, you may need to disable Ubuntu Network Manager. If you assigned a static IP during the Ubuntu install as above, this is not needed, since Network Manager will leave the primary interface eth0 "unmanaged".
sudo stop network-manager
echo "manual" | sudo tee /etc/init/network-manager.override
  • Reboot
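After the reboot, you can confirm that brAdm came up with its static address before continuing. A minimal sketch, assuming the interfaces config above; the has_addr helper is illustrative, not part of joid:

```shell
# has_addr: hypothetical filter that succeeds when the `ip -4 addr` output
# on stdin contains the given IPv4 address. Dots in the address also match
# literally under ERE, which is close enough for a sanity check.
has_addr() {
  grep -qE "inet[[:space:]]+$1/"
}

# Example (on the jumphost):
# ip -4 addr show dev brAdm | has_addr 192.168.10.2 && echo "bridge is up"
# brctl show brAdm   # should list eth0 as a bridge port
```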
  • Clone the joid repo
mkdir ~/git
cd ~/git
git clone http://gerrit.opnfv.org/gerrit/joid.git
  • Set the correct MAC addresses on the controller and compute nodes in the template ~/joid/ci/maas/att/virpod1/deployment.yaml
vi ~/git/joid/ci/maas/att/virpod1/deployment.yaml
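To double-check the edit, you can list every MAC address the file now contains and compare against your hardware labels. A small sketch; list_macs is an illustrative helper, not a joid tool:

```shell
# list_macs: print the unique MAC addresses found in a file.
list_macs() {
  grep -oE '([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}' "$1" | sort -u
}

# Example:
# list_macs ~/git/joid/ci/maas/att/virpod1/deployment.yaml
```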
  • Start MAAS and Juju bootstrap into two VMs on the jumphost
cd ~/git/joid/ci
./02-maasdeploy.sh attvirpod1
  • If the MAAS install is successful, you will see the following result:
You are now logged in to the MAAS server at
http://192.168.10.3/MAAS/api/1.0/ with the profile name 'maas'.
  • You should now be able to log in to MAAS at http://192.168.10.3/MAAS (ubuntu/ubuntu). However, there may have been issues commissioning the jumphost - verify that the jumphost's status in the MAAS Nodes screen is "Ready". If you can't log in to MAAS, or the jumphost node status is not "Ready", commissioning may have failed due to networking issues, e.g. brief internet service disruptions that can interfere with the process. In that case, just restart the 02-maasdeploy.sh command.
    • Some debugging hints for the MAAS VM
      • ssh ubuntu@192.168.10.3
      • tail -f /var/log/cloud-init-output.log
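Since transient network issues are the usual cause, the "just restart" advice above can be wrapped in a small retry loop. A sketch; the retry function is illustrative, not part of joid:

```shell
# retry: run a command, rerunning it until it succeeds or the attempt
# limit (first argument) is reached.
retry() {
  max=$1; shift
  n=1
  until "$@"; do
    [ "$n" -ge "$max" ] && { echo "giving up after $n attempts" >&2; return 1; }
    n=$((n + 1))
    echo "retrying ($n of $max)..." >&2
  done
}

# Example:
# cd ~/git/joid/ci && retry 3 ./02-maasdeploy.sh attvirpod1
```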
  • As a temporary workaround for current issues with node creation by MAAS, create the controller and compute machines manually through the MAAS GUI at http://192.168.10.3/MAAS (ubuntu/ubuntu)
    • From the "Nodes" panel, select "Add Hardware/Machine", Machine name "node1-control", MAC Address per your controller's MAC, Power type "Wake-On-LAN", and click "Save Machine".
      • You should see the machine power-on. You will be returned to the "Nodes" panel. Select "node1-control", "Edit", set Tags "control", and click "Save Changes".
    • Repeat for your compute node, naming it "node2-compute", specify its MAC, and set the Tags to "compute"
  • When both nodes are in the "Ready" state in the MAAS GUI (they will have been powered off by MAAS after commissioning completes), enable IP configuration for the controller's second NIC (eth1). This is a workaround; setup of this NIC should be automated (how TBD)
    • From the "Nodes" panel, select "node1-control" and scroll down to the "Network" section. For "IP address" of eth1, select "Auto assign".
  • Run the OPNFV deploy via Juju
cd ~/git/joid/ci
./deploy.sh -o liberty -s odl -t nonha -l attvirpod1
# to tear down a prior or failed deployment before retrying, run
./clean.sh
# use the following commands if clean.sh hangs
rm -rf ~/.juju/environments
rm -rf ~/.juju/ssh
  • The deploy.sh script will finish with lines such as the following, if successful
2015-12-04 11:21:02 [INFO] deployer.cli: Deployment complete in 1709.63 seconds
+ echo '... Deployment finished ....'
... Deployment finished ....

+ echo 'deploying finished'
deploying finished
  • However, the install is likely not yet complete, as the Juju charms take a while (30 minutes or more) to fully configure and bring the services up to "active" status. You can watch the current status with:
watch juju status --format=tabular
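Rather than watching by eye, the tabular output can be scanned for transitional states. A rough sketch; settled is an illustrative helper, and the list of states is an assumption about typical Juju workload states, not from joid:

```shell
# settled: succeed when no unit in `juju status --format=tabular` output
# (read from stdin) reports a transitional workload state. "unknown"
# counts as settled, since it only means the charm lacks status reporting.
settled() {
  ! grep -qE 'maintenance|blocked|waiting|error|allocating|installing'
}

# Example:
# juju status --format=tabular | settled && echo "all services settled"
```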
  • Once all the services are either in state "active" or "unknown" (which just means that the Juju charm for the service does not yet implement status reporting), the install should be complete. You can then get the addresses of the installed services with:
juju status --format=short
- ceilometer/0: 192.168.10.103 (started) 8777/tcp
- ceph/0: compute1.maas (started)
- cinder/0: 192.168.10.104 (started)
- cinder-ceph/0: 192.168.10.104 (started)
- glance/0: 192.168.10.105 (started) 9292/tcp
- heat/0: 192.168.10.110 (started) 8000/tcp, 8004/tcp
- juju-gui/0: bootstrap.maas (started) 80/tcp, 443/tcp
- keystone/0: 192.168.10.106 (started)
- mongodb/0: 192.168.10.111 (started) 27017/tcp, 27019/tcp, 27021/tcp, 28017/tcp
- mysql/0: 192.168.10.107 (started)
- neutron-api/0: 192.168.10.112 (started) 9696/tcp
 - neutron-api-odl/0: 192.168.10.112 (started)
- neutron-gateway/0: controller1.maas (started)
- nodes-api/0: controller1.maas (started)
 - ntp/0: controller1.maas (started)
- nodes-compute/0: compute1.maas (started)
 - ntp/1: compute1.maas (started)
- nova-cloud-controller/0: 192.168.10.108 (started) 3333/tcp, 8773/tcp, 8774/tcp, 9696/tcp
- nova-compute/0: compute1.maas (started)
 - ceilometer-agent/0: compute1.maas (started)
 - openvswitch-odl/0: compute1.maas (started)
- odl-controller/0: 192.168.10.113 (started)
- openstack-dashboard/0: 192.168.10.114 (started) 80/tcp, 443/tcp
- rabbitmq-server/0: 192.168.10.109 (started) 5672/tcp

Installing Additional Tools

  • On jumphost, install OpenStack Python Client (required for access to OpenStack CLI-based operations) and prerequisites
curl https://bootstrap.pypa.io/get-pip.py -o ~/get-pip.py
sudo -H python ~/get-pip.py
sudo apt-get install -y python-dev
sudo -H pip install python-openstackclient
sudo -H pip install python-glanceclient
sudo -H pip install python-neutronclient
sudo -H pip install python-novaclient
  • Set up authentication for OpenStack CLI commands: on the jumphost, from Horizon / Project / Compute / Access & Security / API Access, select "Download OpenStack RC file", transfer the file to your home directory on the jumphost, then source it.
source ~/admin-openrc.sh
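A quick sanity check that sourcing the RC file actually set the credentials the clients need. The variable names assume a Liberty-era RC file, and check_openrc is an illustrative helper:

```shell
# check_openrc: fail, naming the culprits, if any expected OS_* variable
# is unset or empty in the current environment.
check_openrc() {
  missing=0
  for v in OS_AUTH_URL OS_USERNAME OS_PASSWORD OS_TENANT_NAME; do
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then
      echo "not set: $v" >&2
      missing=1
    fi
  done
  return $missing
}

# Example:
# source ~/admin-openrc.sh && check_openrc && echo "credentials loaded"
```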

Verifying OpenStack and ODL Services are Operational

1) On the jumphost, verify that the openstack-dashboard and odl-controller services are active, and note their addresses (e.g. per the example above) so you can browse to them:

juju status --format=short | grep openstack-dashboard
juju status --format=short | grep odl-controller

Verifying Services are Operational - Smoke Tests

The following procedure will verify that the basic OPNFV services are operational.

  • Clone the copper project repo:
mkdir -p ~/git
cd ~/git
git clone https://gerrit.opnfv.org/gerrit/copper
  • Execute "smoke01.sh", which uses the OpenStack CLI to
    • Create a glance image (cirros)
    • Create public/private network/subnet
    • Create an external router, gateway, and interface
    • Boot two cirros instances on the private network
cd ~/git/copper/tests/adhoc
source smoke01.sh
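Before moving to the console checks, you can confirm that both instances reached ACTIVE status. count_active is an illustrative filter over `nova list` table output, not part of the smoke test scripts:

```shell
# count_active: count the rows of a `nova list` table whose Status column
# is ACTIVE.
count_active() {
  grep -cE '\|[[:space:]]*ACTIVE[[:space:]]*\|'
}

# Example:
# nova list | count_active   # expect 2 after smoke01.sh
```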
  • From Horizon / Project / Compute / Instances, select "cirros-1" and "Console", and login with account "cirros" / "cubswin:)". Ping address of cirros-2 as shown in Horizon, 192.168.10.1 (external router), and opnfv.org (to validate DNS operation).
  • Execute "smoke01-clean.sh" to delete all the changes from smoke01.sh.
cd ~/git/copper/tests/adhoc
source smoke01-clean.sh

Verifying Services are Operational - Horizon UI and CLI

This was the partly manual procedure developed before the smoke tests. It is maintained here so that a Horizon-based procedure is also documented.

1) Create image cirros-0.3.3-x86_64

glance --os-image-api-version 1 image-create --name cirros-0.3.3-x86_64 --disk-format qcow2 --location http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img --container-format bare
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | None                                 |
| container_format | bare                                 |
| created_at       | 2015-12-06T18:38:07.000000           |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 56432206-6038-48ef-a274-8cdb72b7604d |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros-0.3.3-x86_64                  |
| owner            | 259ac920516144e0b0dbc3d96a49227d     |
| protected        | False                                |
| size             | 13200896                             |
| status           | active                               |
| updated_at       | 2015-12-06T18:38:08.000000           |
| virtual_size     | None                                 |
+------------------+--------------------------------------+

2) On jumphost, create external network and subnet using Neutron CLI

  • NOTE: Assumes you have completed steps in "Installing Additional Tools"
neutron net-create public --router:external=true --provider:network_type=flat --provider:physical_network=physnet1
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | b3913813-7a0e-4e20-b102-522f9f21914b |
| mtu                       | 0                                    |
| name                      | public                               |
| provider:network_type     | flat                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  |                                      |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 66fa32b77cfe4558a2b8502f66db435e     |
+---------------------------+--------------------------------------+
neutron subnet-create --disable-dhcp public 192.168.10.0/24
Created a new subnet:
+-------------------+----------------------------------------------------+
| Field             | Value                                              |
+-------------------+----------------------------------------------------+
| allocation_pools  | {"start": "192.168.10.2", "end": "192.168.10.254"} |
| cidr              | 192.168.10.0/24                                    |
| dns_nameservers   |                                                    |
| enable_dhcp       | False                                              |
| gateway_ip        | 192.168.10.1                                       |
| host_routes       |                                                    |
| id                | 3384e0b4-563c-4736-8296-58d5a2c72075               |
| ip_version        | 4                                                  |
| ipv6_address_mode |                                                    |
| ipv6_ra_mode      |                                                    |
| name              |                                                    |
| network_id        | b3913813-7a0e-4e20-b102-522f9f21914b               |
| subnetpool_id     |                                                    |
| tenant_id         | 66fa32b77cfe4558a2b8502f66db435e                   |
+-------------------+----------------------------------------------------+

3) Create internal network.

  • Horizon: From Project / Network / Networks, select "Create Network". Set options Name "internal", select "Next", Subnet Name "internal", Network Address "10.0.0.0/24", select "Next" (leave other options as-is or blank), Allocation Pools "10.0.0.2,10.0.0.254", DNS Name Servers "8.8.8.8", select "Create" (leave other options as-is or blank).
  • CLI:
neutron net-create internal
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 2bd18ff3-bbbe-4710-b67f-624b2ede2aea |
| mtu                       | 0                                    |
| name                      | internal                             |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 1097                                 |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 259ac920516144e0b0dbc3d96a49227d     |
+---------------------------+--------------------------------------+
neutron subnet-create internal 10.0.0.0/24 --name internal --gateway 10.0.0.1 --enable-dhcp --allocation-pool start=10.0.0.2,end=10.0.0.254 --dns-nameserver 8.8.8.8
Created a new subnet:
+-------------------+--------------------------------------------+
| Field             | Value                                      |
+-------------------+--------------------------------------------+
| allocation_pools  | {"start": "10.0.0.2", "end": "10.0.0.254"} |
| cidr              | 10.0.0.0/24                                |
| dns_nameservers   | 8.8.8.8                                    |
| enable_dhcp       | True                                       |
| gateway_ip        | 10.0.0.1                                   |
| host_routes       |                                            |
| id                | 58cb1cb5-b829-4816-b554-215bc48dfbfe       |
| ip_version        | 4                                          |
| ipv6_address_mode |                                            |
| ipv6_ra_mode      |                                            |
| name              |                                            |
| network_id        | 2bd18ff3-bbbe-4710-b67f-624b2ede2aea       |
| subnetpool_id     |                                            |
| tenant_id         | 259ac920516144e0b0dbc3d96a49227d           |
+-------------------+--------------------------------------------+

4) Create router and external port

  • Horizon: From Project / Network / Routers, select "Create Router". Set options Name "external", Connected External Network: "public", and select "Create Router" (leave other options as-is or blank).
  • CLI:
neutron router-create external
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| distributed           | False                                |
| external_gateway_info |                                      |
| ha                    | False                                |
| id                    | eb753d22-00f6-446c-b1e9-d8596b420d76 |
| name                  | external                             |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | 259ac920516144e0b0dbc3d96a49227d     |
+-----------------------+--------------------------------------+
neutron router-gateway-set external public
Set gateway for router external

5) Add internal network interface to the router.

    • Horizon: From Project / Network / Routers, select router "external", and select "Add Interface". Select Subnet "internal", and select "Add Interface" (leave other options as-is or blank). Note: you may be disconnected from the controller node for a short time.
  • CLI:
neutron router-interface-add external subnet=internal

6) Launch images

  • Horizon: From Project / Compute / Images, select "Launch" for image "cirros-0.3.3-x86_64". Select Name "cirros-1", network "internal", and "Launch". Leave other options as-is or blank. Repeat for instance "cirros-2".
  • CLI:
neutron net-list
# copy id of network "internal" and use in the following command
nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64 --nic net-id=2bd18ff3-bbbe-4710-b67f-624b2ede2aea cirros1
+--------------------------------------+------------------------------------------------------------+
| Property                             | Value                                                      |
+--------------------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                     |
| OS-EXT-AZ:availability_zone          |                                                            |
| OS-EXT-SRV-ATTR:host                 | -                                                          |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                          |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000001                                          |
| OS-EXT-STS:power_state               | 0                                                          |
| OS-EXT-STS:task_state                | scheduling                                                 |
| OS-EXT-STS:vm_state                  | building                                                   |
| OS-SRV-USG:launched_at               | -                                                          |
| OS-SRV-USG:terminated_at             | -                                                          |
| accessIPv4                           |                                                            |
| accessIPv6                           |                                                            |
| adminPass                            | jhUuRSoTc2ZR                                               |
| config_drive                         |                                                            |
| created                              | 2015-12-06T21:31:25Z                                       |
| flavor                               | m1.tiny (1)                                                |
| hostId                               |                                                            |
| id                                   | 35738bb6-b7ee-4fd8-8260-85ad01ba2786                       |
| image                                | cirros-0.3.3-x86_64 (56432206-6038-48ef-a274-8cdb72b7604d) |
| key_name                             | -                                                          |
| metadata                             | {}                                                         |
| name                                 | cirros1                                                    |
| os-extended-volumes:volumes_attached | []                                                         |
| progress                             | 0                                                          |
| security_groups                      | default                                                    |
| status                               | BUILD                                                      |
| tenant_id                            | 259ac920516144e0b0dbc3d96a49227d                           |
| updated                              | 2015-12-06T21:31:26Z                                       |
| user_id                              | aa4afba50cba429489e44848bafa6248                           |
+--------------------------------------+------------------------------------------------------------+
nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64 --nic net-id=2bd18ff3-bbbe-4710-b67f-624b2ede2aea cirros2
+--------------------------------------+------------------------------------------------------------+
| Property                             | Value                                                      |
+--------------------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                     |
| OS-EXT-AZ:availability_zone          |                                                            |
| OS-EXT-SRV-ATTR:host                 | -                                                          |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                          |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000002                                          |
| OS-EXT-STS:power_state               | 0                                                          |
| OS-EXT-STS:task_state                | scheduling                                                 |
| OS-EXT-STS:vm_state                  | building                                                   |
| OS-SRV-USG:launched_at               | -                                                          |
| OS-SRV-USG:terminated_at             | -                                                          |
| accessIPv4                           |                                                            |
| accessIPv6                           |                                                            |
| adminPass                            | VPQL7bzRKyXr                                               |
| config_drive                         |                                                            |
| created                              | 2015-12-06T21:32:15Z                                       |
| flavor                               | m1.tiny (1)                                                |
| hostId                               |                                                            |
| id                                   | 1304cc75-098f-4101-9f8e-425954c5d25f                       |
| image                                | cirros-0.3.3-x86_64 (56432206-6038-48ef-a274-8cdb72b7604d) |
| key_name                             | -                                                          |
| metadata                             | {}                                                         |
| name                                 | cirros2                                                    |
| os-extended-volumes:volumes_attached | []                                                         |
| progress                             | 0                                                          |
| security_groups                      | default                                                    |
| status                               | BUILD                                                      |
| tenant_id                            | 259ac920516144e0b0dbc3d96a49227d                           |
| updated                              | 2015-12-06T21:32:16Z                                       |
| user_id                              | aa4afba50cba429489e44848bafa6248                           |
+--------------------------------------+------------------------------------------------------------+
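The BUILD status in the tables above flips to ACTIVE once boot completes; you can poll for it by extracting the status field from `nova show` output. instance_status is an illustrative helper, not a nova subcommand:

```shell
# instance_status: pull the "status" value out of a `nova show` table
# (read from stdin).
instance_status() {
  awk -F'|' '$2 ~ /^[[:space:]]*status[[:space:]]*$/ { gsub(/ /, "", $3); print $3 }'
}

# Example:
# nova show cirros1 | instance_status   # BUILD while booting, then ACTIVE
```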

7) Verify connectivity from VMs.

  • Horizon: From Project / Compute / Instances, select "cirros-1" and "Console", and login with account "cirros" / "cubswin:)". Ping the address of cirros-2 as shown in Horizon, 192.168.10.1 (external router), and opnfv.org (to validate DNS operation).
copper/academy/joid.txt · Last modified: 2016/02/22 14:22 by Bryan Sullivan