  * Make sure you have good internet connectivity, as it's required for package installs/updates etc.
  
  * Install Ubuntu 14.04.3 LTS desktop on the jumphost. Use a non-root user (opnfv, with a password you choose), which by default should be part of the sudoers group.
    * If the jumphost is attached to a gateway with DHCP disabled (as described on the [[https://wiki.opnfv.org/copper/academy|Academy main page]]), you will need to assign the static IP to the primary ethernet interface during the Ubuntu install (since network autoconfig will fail). Use these values:
      * IP address: 192.168.10.2
<code>
chmod 744 ~/.ssh
</code>
  
  * Create a bridge brAdm on eth0 by modifying the /etc/network/interfaces file on the jumphost, for example as below. Subnet "192.168.10.x" is used here to avoid conflicts with subnets commonly used in LAN environments.
  
<code>
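# A representative sketch only (the exact contents are site-specific); assumes
# the bridge-utils package is installed and the 192.168.10.x addressing used
# on this page, with the gateway at 192.168.10.1.
auto lo
iface lo inet loopback

auto brAdm
iface brAdm inet static
    address 192.168.10.2
    netmask 255.255.255.0
    gateway 192.168.10.1
    dns-nameservers 8.8.8.8
    bridge_ports eth0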
</code>
    * Reboot
  
  * Clone the joid repo
<code>
mkdir ~/git
cd ~/git
git clone http://gerrit.opnfv.org/gerrit/joid.git
</code>
  
  * Set the correct MAC addresses for the controller and compute nodes in the template ~/git/joid/ci/maas/att/virpod1/deployment.yaml (a sketch of the relevant fields follows the command below):
<code>
vi ~/git/joid/ci/maas/att/virpod1/deployment.yaml
</code>
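The fragment below is hypothetical and only illustrates the kind of entries to look for; the field names follow the general maas-deployer template format and may differ from the actual file. Only the MAC values should need changing.
<code>
nodes:
  - name: node1-control
    tags: control
    architecture: amd64/generic
    mac_addresses:
      - "de:ad:be:ef:00:01"   # replace with your controller NIC's MAC
  - name: node2-compute
    tags: compute
    architecture: amd64/generic
    mac_addresses:
      - "de:ad:be:ef:00:02"   # replace with your compute NIC's MAC
</code>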
  
  * Start MAAS and Juju bootstrap into two VMs on the jumphost
<code>
cd ~/git/joid/ci
./02-maasdeploy.sh attvirpod1
</code>
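Optionally, to confirm that the two VMs were created, a quick check from the jumphost (a sketch; assumes the libvirt virsh client, which the MAAS deploy scripts rely on):
<code>
# the MAAS and Juju bootstrap VMs should appear as running
sudo virsh list --all
</code>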
  
      * tail -f /var/log/cloud-init-output.log
  
  * As a temporary workaround for current issues with node creation by MAAS, create the controller and compute machines manually through the MAAS GUI at http://192.168.10.3/MAAS (ubuntu/ubuntu). A quick Wake-On-LAN check you can run first is sketched after this list.
    * From the "Nodes" panel, select "Add Hardware/Machine". Set Machine name "node1-control", MAC Address per your controller's MAC, Power type "Wake-On-LAN" (its MAC Address field again per your controller's MAC), and click "Save Machine".
      * You should see the machine power on, and you will be returned to the "Nodes" panel. Select "node1-control", then "Edit", set Tags to "control", and click "Save Changes".
    * Repeat for your compute node, naming it "node2-compute", specifying its MAC, and setting the Tags to "compute".
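Before adding the nodes, you can optionally verify that Wake-On-LAN reaches a node from the jumphost. This is a sketch: the wakeonlan package and the placeholder MAC are assumptions.
<code>
sudo apt-get install -y wakeonlan
wakeonlan de:ad:be:ef:00:01   # substitute your node's actual MAC
</code>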
  
  * When both nodes are in the "Ready" state in the MAAS GUI (they will have been powered off by MAAS after completion), enable IP configuration for the controller's second NIC (eth1). This is a workaround; setup of this NIC should be automated (how TBD).
    * From the "Nodes" panel, select "node1-control" and scroll down to the "Network" section. For "IP address" of eth1, select "Auto assign".
  
  * Run the OPNFV deploy via Juju
<code>
cd ~/git/joid/ci
./deploy.sh -o liberty -s odl -t nonha -l attvirpod1
</code>
  * However, it's likely not yet complete, as the Juju charms take a while (30 minutes or more) to fully configure and bring the services up to "active" status. You can watch the current status with:
<code>
watch juju status --format=tabular
</code>
  
  * Once all the services are either in state "active" or "unknown" (which just means that the Juju charm for the service does not yet implement status reporting), the install should be complete. You can then get the addresses of the installed services with:
<code>
juju status --format=short
</code>
  
===== Installing Additional Tools =====
  
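As a minimal sketch, the OpenStack CLI clients used in the verification steps below can be installed as follows (the exact package set and the pip-based install are assumptions; your environment may differ):
<code>
sudo apt-get install -y python-pip
sudo pip install python-openstackclient python-neutronclient
</code>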
  
===== Verifying OpenStack and ODL Services are Operational =====
  
1) On the jumphost, browse to the following services and verify that they are active (using the addresses from the juju status output above):
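The URLs below are placeholders, assuming the default Horizon path and OpenDaylight DLUX port; substitute the host addresses reported by "juju status --format=short":
<code>
http://<openstack-dashboard-host>/horizon
http://<odl-controller-host>:8181/index.html
</code>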
  
===== Verifying Services are Operational - Smoke Tests =====

The following procedure will verify that the basic OPNFV services are operational.

  * Clone the copper project repo:
<code>
mkdir -p ~/git
cd ~/git
git clone https://gerrit.opnfv.org/gerrit/copper
</code>

  * Execute "smoke01.sh", which uses the OpenStack CLI to
    * Create a glance image (cirros)
    * Create public/private network/subnet
    * Create an external router, gateway, and interface
    * Boot two cirros instances on the private network
<code>
cd ~/git/copper/tests/adhoc
source smoke01.sh
</code>

  * From Horizon / Project / Compute / Instances, select "cirros-1" and "Console", and login with account "cirros" / "cubswin:)". Ping the address of cirros-2 as shown in Horizon, 192.168.10.1 (external router), and opnfv.org (to validate DNS operation).

  * Execute "smoke01-clean.sh" to delete all the changes made by smoke01.sh:
<code>
cd ~/git/copper/tests/adhoc
source smoke01-clean.sh
</code>

===== Verifying Services are Operational - Horizon UI and CLI =====

This is the partly manual procedure developed before the smoke tests; it is maintained so that a Horizon-based procedure is also documented.

1) Create image cirros-0.3.3-x86_64
  * Horizon: From Project / Compute / Images, select "Create Image". Set options Name "cirros-0.3.3-x86_64", Image Location "http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img", Format "QCOW2", Architecture "x86_64" (leave other options as-is or blank), and select "Create Image".
  * CLI:
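A sketch of the equivalent CLI (assumes the glance v1 image API, which supports --copy-from; source your OpenStack credentials first):
<code>
glance image-create --name cirros-0.3.3-x86_64 --disk-format qcow2 \
  --container-format bare \
  --copy-from http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
</code>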
  
2) On the jumphost, create the external network and subnet using the Neutron CLI
  * NOTE: Assumes you have completed steps in "Installing Additional Tools"
<code>
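# Illustrative commands (a sketch: names and the 192.168.10.x addressing follow
# this page, but the allocation-pool range is an assumption):
neutron net-create public --router:external=True
neutron subnet-create --name public --gateway 192.168.10.1 --disable-dhcp \
  --allocation-pool start=192.168.10.100,end=192.168.10.200 \
  public 192.168.10.0/24
# partial sample output of the subnet-create command: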
| ipv6_address_mode |                                                    |
| ipv6_ra_mode      |                                                    |
| name              |                                                    |
| network_id        | b3913813-7a0e-4e20-b102-522f9f21914b               |
| subnetpool_id     |                                                    |
</code>
  
3) Create internal network.
  * Horizon: From Project / Network / Networks, select "Create Network". Set options Name "internal", select "Next", Subnet Name "internal", Network Address "10.0.0.0/24", select "Next", Allocation Pools "10.0.0.2,10.0.0.254", DNS Name Servers "8.8.8.8", and select "Create" (leave other options as-is or blank).
  * CLI:
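A sketch of the equivalent CLI (values per the Horizon step above; the gateway address is an assumption):
<code>
neutron net-create internal
neutron subnet-create --name internal --gateway 10.0.0.1 \
  --allocation-pool start=10.0.0.2,end=10.0.0.254 \
  --dns-nameserver 8.8.8.8 internal 10.0.0.0/24
</code>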
  
4) Create router and external port
  * Horizon: From Project / Network / Routers, select "Create Router". Set options Name "external", Connected External Network "public", and select "Create Router" (leave other options as-is or blank).
  * CLI:
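A sketch of the equivalent CLI (router and network names per the Horizon step above):
<code>
neutron router-create external
neutron router-gateway-set external public
</code>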
  
5) Add internal network interface to the router.
  * Horizon: From Project / Network / Routers, select router "external", and select "Add Interface". Select Subnet "internal", and select "Add Interface" (leave other options as-is or blank). Note: you may be disconnected from the Controller node for a short time.
  * CLI:
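A sketch of the equivalent CLI (the arguments are the router name and the subnet name):
<code>
neutron router-interface-add external internal
</code>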
  
6) Launch images
  * Horizon: From Project / Compute / Images, select "Launch" for image "cirros-0.3.3-x86_64". Select Name "cirros-1", network "internal", and "Launch" (leave other options as-is or blank). Repeat for instance "cirros-2".
  * CLI:
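A sketch of the equivalent CLI (the m1.small flavor is an assumption; any flavor that fits the cirros image works):
<code>
# look up the id of the "internal" network
NET_ID=$(neutron net-list | awk '/ internal / {print $2}')
nova boot --flavor m1.small --image cirros-0.3.3-x86_64 --nic net-id=$NET_ID cirros-1
nova boot --flavor m1.small --image cirros-0.3.3-x86_64 --nic net-id=$NET_ID cirros-2
</code>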
  
7) Verify connectivity from VMs.
  * Horizon: From Project / Compute / Instances, select "cirros-1" and "Console", and login with account "cirros" / "cubswin:)". Ping the address of cirros-2 as shown in Horizon, 192.168.10.1 (external router), and opnfv.org (to validate DNS operation).
  