  * Make sure you have good internet connectivity, as it's required for package installs/updates etc.

  * Install Ubuntu 14.04.3 LTS desktop on the jumphost. Use a non-root user (opnfv, with a password of your choice), which by default should be part of the sudoers group.
    * If the jumphost is attached to a gateway with DHCP disabled (as described on the [[https://wiki.opnfv.org/copper/academy|Academy main page]]), you will need to assign the static IP to the primary ethernet interface during the Ubuntu install (since network autoconfig will fail). Use these values:
      * IP address: 192.168.10.2
</code>
    * Reboot
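    * (Optional) After the reboot, sanity-check the network setup before continuing. This is a minimal sketch assuming the static-IP values above; the interface name (eth0) and the host used for the connectivity test are assumptions you may need to adjust.
<code>
# confirm the static address was applied to the primary interface
ip addr show eth0 | grep 192.168.10.2
# confirm internet access, since it's needed for package installs/updates
ping -c 3 gerrit.opnfv.org
</code>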
  
  * Clone the joid repo
<code>
mkdir ~/git
cd ~/git
git clone http://gerrit.opnfv.org/gerrit/joid.git
</code>
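  * (Optional) Confirm the clone succeeded and note which commit you are building from. A minimal check; no particular branch or tag is assumed here.
<code>
git -C ~/git/joid log --oneline -1
</code>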
  
  * Set the correct MAC addresses for the controller and compute nodes in the template ~/git/joid/ci/maas/att/virpod1/deployment.yaml
<code>
vi ~/git/joid/ci/maas/att/virpod1/deployment.yaml
</code>
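  * If you don't already know each node's MAC address, you can read it from the node itself before editing the template. A rough sketch, run on each controller/compute node from any booted Linux environment; NIC names will vary per machine.
<code>
# print every NIC name with its MAC address
for nic in /sys/class/net/*; do echo "$(basename $nic): $(cat $nic/address)"; done
</code>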
  
  * Start MAAS and Juju bootstrap into two VMs on the jumphost
<code>
cd ~/git/joid/ci
./02-maasdeploy.sh attvirpod1
</code>
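  * (Optional) When the script finishes, you can confirm that the two VMs (MAAS and the Juju bootstrap node) are running on the jumphost, and that the MAAS web UI at http://192.168.10.3/MAAS loads in a browser. A rough check only; the VM names reported by virsh depend on the deployment and are not assumed here.
<code>
sudo virsh list --all
</code>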
  
      * tail -f /var/log/cloud-init-output.log
  
  * As a temporary workaround for current issues with node creation by MAAS, create the controller and compute machines manually through the MAAS gui at http://192.168.10.3/MAAS (ubuntu/ubuntu)
    * From the "Nodes" panel, select "Add Hardware/Machine". Set the Machine name to "node1-control", the MAC Address per your controller's MAC, the Power type to "Wake-On-LAN" (with the same MAC as its MAC Address), and click "Save Machine".
      * You should see the machine power on (if not, see the optional Wake-On-LAN check below). You will be returned to the "Nodes" panel. Select "node1-control", "Edit", set Tags to "control", and click "Save Changes".
    * Repeat for your compute node, naming it "node2-compute", specify its MAC, and set the Tags to "compute".
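  * (Optional) If a node does not power on when MAAS tries to wake it, you can test Wake-On-LAN manually from the MAAS VM. A sketch only: the MAC address below is a placeholder you must replace, and the wakeonlan package is assumed not to be installed yet.
<code>
ssh ubuntu@192.168.10.3
sudo apt-get install -y wakeonlan
# replace with the MAC of the node you are trying to power on
wakeonlan 00:11:22:33:44:55
</code>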
  
  * When both nodes are in the "Ready" state in the MAAS gui (they will have been powered off by MAAS after completion), enable IP configuration for the controller's second NIC (eth1). This is a workaround; setup of this NIC should be automated (how TBD)
    * From the "Nodes" panel, select "node1-control" and scroll down to the "Network" section. For "IP address" of eth1, select "Auto assign".
  
  * Run the OPNFV deploy via Juju
<code>
cd ~/git/joid/ci
./deploy.sh -o liberty -s odl -t nonha -l attvirpod1
</code>
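  * (Optional) The deploy runs for quite a while; from a second terminal you can follow what the charms are doing as it progresses.
<code>
juju debug-log
</code>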
  * However, it's likely not yet complete, as the juju charms take a while (30 minutes or more) to fully configure and bring up the services to the "active" status. You can watch the current status with:
<code>
watch juju status --format=tabular
</code>
  
  * Once all the services are either in state "active" or "unknown" (which just means that the Juju charm for the service does not yet implement status reporting), the install should be complete. You can then get the addresses of the installed services with:
<code>
juju status --format=short
</code>
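  * For example, you can find the address of the OpenStack dashboard and log in from a browser. The grep pattern and the /horizon URL path are assumptions based on the default openstack-dashboard charm, so adjust them to your output.
<code>
juju status --format=short | grep openstack-dashboard
# then browse to http://<dashboard-ip>/horizon
</code>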
  
===== Installing Additional Tools =====
  