The procedure below has been used to create a 3-node OPNFV install using Intel NUC i7 nodes with 16GB RAM, a 250GB SSD, and a 1TB HDD. The install uses one NUC for the jumphost, one for the controller node (OpenStack + ODL), and one for the compute node.

===== Apex-Based Basic Install =====
  
Basic install guidelines: http://artifacts.opnfv.org/apex/docs/installation-instructions/baremetal.html

  * Make sure you have good internet connectivity, as it's required for package installs and updates.
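  * For example, a quick connectivity sanity check (a suggested step, not part of the upstream guide):
<code>
ping -c 3 www.opnfv.org
curl -I http://artifacts.opnfv.org/
</code>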
  
  * Install Centos 7 on the jumphost.
    * This procedure assumes you are installing the Centos 7 Minimal Server ISO
    * Pick a root password, and create a non-root user "opnfv"
    * Select hostname "jumphost"
    * If the jumphost is attached to a gateway with DHCP disabled (as described on the [[https://wiki.opnfv.org/copper/academy|Academy main page]]), you will need to assign a static IP to the primary ethernet interface during the CentOS install, since network autoconfig will fail. Use these values, and verify them as shown below:
      * IP address: 192.168.10.2
      * Netmask: 255.255.255.0
      * Gateway: 192.168.10.1
      * DNS: 8.8.8.8 8.8.4.4
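  * After the first boot, you can confirm the static assignment from the console (a suggested check; the interface name will vary with your hardware):
<code>
ip addr show
ip route show
cat /etc/resolv.conf
</code>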
  * Give sudo access to the opnfv account
<code>
su
visudo
(add the line)
opnfv   ALL=(ALL)       ALL
</code>
  * Update the system and reboot.
<code>
sudo yum -y update
sudo reboot
</code>
  * If desired, install the GNOME desktop (recommended, since you will likely be using the jumphost for other things as well).
<code>
sudo yum -y groupinstall "GNOME Desktop"
sudo systemctl set-default graphical.target
sudo systemctl start graphical.target
</code>
  * Install the virtualization prerequisites, then start and enable libvirt.
<code>
sudo yum -y groupinstall "Virtualization Host"
sudo service libvirtd start
sudo chkconfig libvirtd on
</code>
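  * To confirm that libvirt is up before proceeding (a suggested check):
<code>
sudo systemctl status libvirtd
sudo virsh list --all
</code>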
  
  * Install git, and create a git folder where you will clone repos as needed
<code>
sudo yum -y install git
mkdir ~/git
# clone into ~/git so the repo lands at ~/git/apex, as referenced below
cd ~/git
git clone https://gerrit.opnfv.org/gerrit/apex
</code>
  
==== Install RDO and Apex RPMs ====
  
  * Download and save the latest RPMs from the [[http://artifacts.opnfv.org/|OPNFV artifacts page]] (see the example after this list):
    * opnfv-apex-x.y-yyyymmdd.noarch.rpm
    * opnfv-apex-common-x.y-yyyymmdd.noarch.rpm
    * opnfv-apex-undercloud-x.y-yyyymmdd.noarch.rpm
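  * For example, with wget (the URL paths and version strings below are placeholders; copy the actual links from the artifacts page):
<code>
sudo yum -y install wget
mkdir -p ~/Downloads && cd ~/Downloads
wget http://artifacts.opnfv.org/apex/opnfv-apex-x.y-yyyymmdd.noarch.rpm
wget http://artifacts.opnfv.org/apex/opnfv-apex-common-x.y-yyyymmdd.noarch.rpm
wget http://artifacts.opnfv.org/apex/opnfv-apex-undercloud-x.y-yyyymmdd.noarch.rpm
</code>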
  
  * Install the RDO RPM, then the Apex RPMs
<code>
sudo yum install -y https://www.rdoproject.org/repos/rdo-release.rpm
cd ~/Downloads
# ! the next command assumes only the Apex RPMs are in your download location... order does not matter
sudo yum install -y *.rpm
</code>
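  * To confirm the packages installed cleanly (a suggested check):
<code>
rpm -qa | grep opnfv-apex
</code>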
  
==== Creating a Node Inventory File ====
  
This section expands on the instructions at [[http://artifacts.opnfv.org/apex/docs/installation-instructions/baremetal.html#creating-a-node-inventory-file|Creating a Node Inventory File]] in the upstream guide.
  
  * Based upon the examples in the Apex repo folder ~/git/apex/config/inventory, create the inventory file ~/inventory.yaml
<code>
vi ~/inventory.yaml
nodes:
  node1:
    mac_address: "<controller mac>"
    cpus: 2
    memory: 16384
    disk: 1024
    arch: "x86_64"
    capabilities: "profile:control"
  node2:
    mac_address: "<compute mac>"
    cpus: 2
    memory: 16384
    disk: 1024
    arch: "x86_64"
    capabilities: "profile:compute"
</code>
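  * If you don't already have each node's MAC address, you can read it from the node's console before powering the node down (or from its BIOS/network boot screen), e.g.:
<code>
ip link show
</code>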
  
==== Creating the Settings Files ====
  
This section expands on the instructions at [[http://artifacts.opnfv.org/apex/docs/installation-instructions/baremetal.html#creating-the-settings-files|Creating the Settings Files]] in the upstream guide.
  
  * Based upon the examples in the Apex repo folder ~/git/apex/config/deploy, create the deploy settings file ~/deploy_settings.yaml
    * Note: the below assumes the IPMI settings should be deleted, but it's not yet clear how to enable wake-on-lan instead
<code>
vi ~/deploy_settings.yaml
global_params:
  ha_enabled: false

deploy_options:
  sdn_controller: opendaylight
  sdn_l3: false
  tacker: false
  congress: false
  sfc: false
</code>
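  * A quick syntax check of both files before deploying (a suggested step; assumes the PyYAML module is available on the jumphost):
<code>
python -c "import yaml; yaml.safe_load(open('$HOME/inventory.yaml'))"
python -c "import yaml; yaml.safe_load(open('$HOME/deploy_settings.yaml'))"
</code>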
  
  * Enable wake-on-lan
<code>
cp /usr/bin/opnfv-deploy ~/opnfv-deploy
vi ~/opnfv-deploy
(change the line as below and save)
          \"pm_type\": \"pxe_wol\",
sudo cp ~/opnfv-deploy /usr/bin/opnfv-deploy
</code>
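  * You can test wake-on-lan from the jumphost before deploying (a suggested check; ether-wake is part of the net-tools package, and the interface and MAC below are placeholders):
<code>
sudo yum -y install net-tools
sudo ether-wake -i <jumphost NIC> <controller mac>
</code>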