ipv6_opnfv_project:create_networks

Steps to be executed on the snsj113 (IP: 198.59.156.113) machine (''refer to [[http://spirent.app.box.com/notes/37713582266?s=tgm98glcerw9zlcdy6pclavjlorgop4v|the details of lab environment]]'').
  
1. Login as the odl user and source the credentials.
  
   cd ~/devstack
   odl@opnfv-openstack-ubuntu:~/devstack$ source openrc admin demo

2. Clone the opnfv_os_ipv6_poc repo.

   git clone https://github.com/sridhargaddam/opnfv_os_ipv6_poc.git /opt/stack/opnfv_os_ipv6_poc
  
3. We want to manually create the networks/subnets that will help us to achieve the POC, so we have used the flag ''NEUTRON_CREATE_INITIAL_NETWORKS=False'' in the ''local.conf'' file. When this flag is set to False, devstack does not create any networks/subnets during the setup phase.
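
For reference, the relevant entry in ''local.conf'' looks like the following (a minimal excerpt, assuming the usual devstack local.conf layout; the rest of the file is unchanged):

   # local.conf (excerpt): skip automatic network/subnet creation during stack.sh
   [[local|localrc]]
   NEUTRON_CREATE_INITIAL_NETWORKS=False
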
   neutron router-interface-add ipv6-router ipv4-int-subnet2
  
5. Download the ''fedora22'' image which will be used as the vRouter.

   glance image-create --name 'Fedora22' --disk-format qcow2 --container-format bare --is-public true --copy-from https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2
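
Once glance finishes copying the image, it should show up as active (a quick check using the same glance CLI):

   glance image-list | grep Fedora22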
  
6. Create a keypair.

   nova keypair-add vRouterKey > ~/vRouterKey
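
Since this private key will later be used to ssh into the VMs, it helps to tighten its permissions so the ssh client does not reject the key file (a small optional step):

   chmod 600 ~/vRouterKey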
  
7. Create the Neutron ports that will be used by the vRouter and the two VMs.

   neutron port-create --name eth0-vRouter --mac-address fa:16:3e:11:11:11 ipv4-int-network2
   neutron port-create --name eth1-vRouter --mac-address fa:16:3e:22:22:22 ipv4-int-network1
   neutron port-create --name eth0-VM1 --mac-address fa:16:3e:33:33:33 ipv4-int-network1
   neutron port-create --name eth0-VM2 --mac-address fa:16:3e:44:44:44 ipv4-int-network1
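
The boot commands below look these ports up by name; a quick way to confirm they were all created:

   neutron port-list | grep -E 'eth0-vRouter|eth1-vRouter|eth0-VM1|eth0-VM2'
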
8. Boot the vRouter using the ''Fedora22'' image on the Compute node (hostname: opnfv-odl-ubuntu).

   nova boot --image Fedora22 --flavor m1.small --user-data /opt/stack/opnfv_os_ipv6_poc/metadata.txt --availability-zone nova:opnfv-odl-ubuntu --nic port-id=$(neutron port-list | grep -w eth0-vRouter | awk '{print $2}') --nic port-id=$(neutron port-list | grep -w eth1-vRouter | awk '{print $2}') --key-name vRouterKey vRouter
  
9. Verify that the vRouter boots up successfully and the ssh keys are properly injected.
  
''Note: It may take a few minutes for the necessary packages to get installed and the ssh keys to be injected.''
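
One way to check the key injection is via the instance console log (assuming the standard nova CLI used elsewhere on this page; the exact output depends on the image and cloud-init version):

   nova console-log vRouter | grep -i -E 'ssh|ci-info'
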
''Note: VM1 is created on Control+Network node (i.e., opnfv-openstack-ubuntu)''
  
    nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic port-id=$(neutron port-list | grep -w eth0-VM1 | awk '{print $2}') --availability-zone nova:opnfv-openstack-ubuntu --key-name vRouterKey --user-data /opt/stack/opnfv_os_ipv6_poc/set_mtu.sh VM1
  
''Note: VM2 is created on the Compute node (i.e., opnfv-odl-ubuntu). We will have to configure an appropriate MTU on the VM iface, taking into account the tunneling overhead and any physical switch requirements, and push that MTU to the VM either using DHCP options or via meta-data (a sketch of such a user-data script is shown after the boot command below).''
  
    nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny --nic port-id=$(neutron port-list | grep -w eth0-VM2 | awk '{print $2}') --availability-zone nova:opnfv-odl-ubuntu --key-name vRouterKey --user-data /opt/stack/opnfv_os_ipv6_poc/set_mtu.sh VM2
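
The ''set_mtu.sh'' script referenced above ships in the opnfv_os_ipv6_poc repo and is not reproduced here; a minimal sketch of what such a user-data script could look like (the interface name and the MTU value of 1400 are illustrative assumptions only):

   #!/bin/sh
   # Sketch only: lower the MTU on the VM iface to leave room for tunnel overhead.
   # The real set_mtu.sh in the repo may use a different value or interface name.
   ip link set dev eth0 mtu 1400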
  
11. Confirm that both the VMs are successfully booted.
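
For example, all three instances should be listed as ACTIVE:

    nova list | grep -E 'vRouter|VM1|VM2'
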
# Configure the IPv6 address on the <qr-xxx> iface.
  
    export router_interface=$(ip a s | grep -w "global qr-*" | awk '{print $7}')
    ip -6 addr add 2001:db8:0:1::1 dev $router_interface
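
Note that these commands (and the radvd invocation below) are executed inside the Neutron router namespace on the Control+Network node. If needed, one way to enter it, assuming the router is the ''ipv6-router'' created earlier:

    sudo ip netns exec qrouter-$(neutron router-list | grep -w ipv6-router | awk '{print $2}') bash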
  
Update the sample ''radvd.conf'' file with the actual $router_interface and spawn the ''radvd'' daemon inside the namespace to simulate an external IPv6 router.
  
    cp /opt/stack/opnfv_os_ipv6_poc/scenario2/radvd.conf /tmp/radvd.$router_interface.conf
    sed -i 's/$router_interface/'$router_interface'/g' /tmp/radvd.$router_interface.conf
    $radvd -C /tmp/radvd.$router_interface.conf -p /tmp/br-ex.pid.radvd -m syslog
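
For reference, a radvd configuration advertising the ''2001:db8:0:1::/64'' prefix on $router_interface looks like the following; the copy in ''scenario2/radvd.conf'' of the repo is the authoritative version:

    interface $router_interface
    {
       AdvSendAdvert on;
       MinRtrAdvInterval 3;
       MaxRtrAdvInterval 10;
       prefix 2001:db8:0:1::/64
       {
          AdvOnLink on;
          AdvAutonomous on;
       };
    };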
  
Configure the downstream route pointing to the ''eth0'' iface of vRouter. This is necessary for the simulated external router to know that ''2001:db8:0:2::/64'' is reachable via the ''eth0'' interface of vRouter.

    ip -6 route add 2001:db8:0:2::/64 via 2001:db8:0:1:f816:3eff:fe11:1111

''Note: The routing table inside the namespace should now look similar to the output below.''

    ip -6 route show
    2001:db8:0:1::1 dev qr-42968b9e-62  proto kernel  metric 256
    2001:db8:0:1::/64 dev qr-42968b9e-62  proto kernel  metric 256  expires 86384sec
    2001:db8:0:2::/64 via 2001:db8:0:1:f816:3eff:fe11:1111 dev qr-42968b9e-62  proto ra  metric 1024  expires 29sec
    fe80::/64 dev qg-3736e0c7-7c  proto kernel  metric 256
    fe80::/64 dev qr-42968b9e-62  proto kernel  metric 256
  
Now, let us ssh to one of the VMs (say VM1) to confirm that it has successfully configured an IPv6 address using SLAAC with the prefix advertised by vRouter.
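
A sketch of that check (how you reach VM1's IPv4 address, e.g. from a qrouter/qdhcp namespace, depends on the setup; ''cirros'' is the default user of the cirros image):

    ssh -i ~/vRouterKey cirros@<IPv4-address-of-VM1>
    # inside VM1: look for an inet6 (SLAAC) address from the prefix advertised by vRouter
    ip addr show eth0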