SDN VPN project main page
The project aims to integrate the OpenStack Neutron BGPVPN project and its supported backends into the OPNFV reference platform.
Deployment options to be supported by Installers
The different BGPVPN backends should be supported through deployment options that can be selected when running the installers:
Bagpipe backend
OpenContrail backend (unclear if supported in Brahmaputra - contributors needed)
ODL backend (tests will only pass with ODL Beryllium)
ONOS is currently not supported by the OpenStack Neutron BGPVPN extension (networking-bgpvpn repository).
The tests should be written independently of the backend used, i.e. they should pass regardless of which backend is deployed. For the CI pipeline we are aiming at one Jenkins job per supported backend, which deploys that backend and runs the backend-independent tests against it.
Manual deployment procedures as starting point for installer development
In the following we outline a manual deployment procedure which will serve as input to installer development work (i.e. this procedure is what the installer needs to automate).
Baseline assumptions
Step 1: BGPVPN installation
BGPVPN extends the Neutron API with VPN support. Installation procedure:
Bagpipe backend
OpenContrail backend
ODL backend
ODL: activate VPN Service feature in karaf
Add to local.conf: NETWORKING_BGPVPN_DRIVER="BGPVPN:OpenDaylight:networking_bgpvpn.neutron.services.service_drivers.opendaylight.odl.OpenDaylightBgpvpnDriver:default"
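A hedged sketch of these two steps for a devstack-based setup follows; the karaf feature name and the devstack plugin line are assumptions that should be checked against the ODL Beryllium release notes and the networking-bgpvpn documentation:

    # In the ODL karaf console: activate the VPN Service feature
    # (feature name assumed for ODL Beryllium; verify against the release)
    feature:install odl-vpnservice-openstack

    # In local.conf: enable the networking-bgpvpn devstack plugin in
    # addition to the NETWORKING_BGPVPN_DRIVER line quoted above
    enable_plugin networking-bgpvpn https://git.openstack.org/openstack/networking-bgpvpn.git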
Test cases and CI
The aim is to run the same suite of tests for all supported backends, i.e. within the CI pipeline a separate Jenkins job is needed for each deployment option (deploy the option, run the common tests).
Functest
Within Functest the basic functionality of the ODL VPN Service is verified directly through the ODL REST API. We reuse the Robot Framework test suite that has been developed for this purpose in ODL. The OpenStack Neutron BGPVPN API is verified using a second suite of Robot Framework tests located in the OPNFV SDN VPN repository (these may be moved to the OpenStack networking-bgpvpn repository later).
ODL VPN Service tests
Name: ODL VPN Service API tests
Description: Robot Framework tests for the ODL VPN Service, using the ODL REST API.
The ODL integration-test repository is already cloned by Functest; the only remaining step is to include the vpnservice test suite in the list of tests run by Functest (a sketch of a manual invocation follows the list below). The suite covers the following steps:
Create VPN instance and check command return code
Check if VPN instance is present
Create IETF VM interface and check return code
Verify IETF VM interface
Create VPN interface for IETF interface
Verify VPN interface
Verify FIB entry after create
Delete VM VPN interface
Verify after deleting VM VPN interface
Delete VPN instance
Verify after deleting VPN instance
Delete VM IETF interface
Verify after deleting VM IETF interface
Verify FIB entry after delete
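For reference, a hedged sketch of a manual invocation of this suite; the suite path and variable names follow the usual ODL integration-test/CSIT conventions and are assumptions (Functest wires this up itself):

    # Run the vpnservice suite from the cloned integration-test repository
    # against a running ODL instance (IP address and port are placeholders)
    pybot --variable ODL_SYSTEM_IP:192.0.2.10 \
          --variable RESTCONFPORT:8181 \
          integration/test/csit/suites/vpnservice/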
OpenStack Neutron BGPVPN API tests
Name: OpenStack Neutron BGPVPN API tests
Description: Robot Framework tests for BGPVPN Neutron API extensions
These tests will be kept in the SDN VPN repository for the time being and may be moved to the networking-bgpvpn repository at a later stage (a CLI sketch of the covered operations follows the list below).
Create BGPVPN and check if the main parameters of the created object are correct.
Create BGPVPN with a malformed route target (e.g. the literal string ASN:NN instead of actual numbers) should fail.
Create BGPVPN with an invalid route target (e.g. 65536:0, whose AS number exceeds the 16-bit range) should fail.
Getting the VPN list works without producing an error.
Updating an existing BGPVPN works.
Displaying parameters of an existing BGPVPN works.
Deleting a BGPVPN works.
Associating an existing BGPVPN with a Neutron network works.
Getting the associated Neutron network works.
Deleting the network association works.
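The same API calls can be exercised by hand through the neutron client extensions shipped with networking-bgpvpn; a hedged sketch follows (command and flag names should be checked against the installed client version, IDs are placeholders):

    # Create a BGPVPN with a valid route target and inspect it
    neutron bgpvpn-create --name vpn1 --route-targets 64512:1
    neutron bgpvpn-show vpn1
    neutron bgpvpn-list

    # Malformed and out-of-range route targets should both be rejected
    neutron bgpvpn-create --route-targets ASN:NN      # should fail
    neutron bgpvpn-create --route-targets 65536:0     # should fail

    # Update, network association and cleanup
    neutron bgpvpn-update vpn1 --name vpn1-renamed
    neutron bgpvpn-net-assoc-create vpn1-renamed --network NET_ID
    neutron bgpvpn-net-assoc-list vpn1-renamed
    neutron bgpvpn-net-assoc-delete ASSOC_ID vpn1-renamed
    neutron bgpvpn-delete vpn1-renamed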
Yardstick
Two compute nodes, Node1 and Node2, are used during the tests.
Common test setup procedure:
Common test teardown procedure:
Test Case 1 - VPN provides connectivity between subnets
Name: VPN connecting Neutron networks and subnets
Description: VPNs provide connectivity across Neutron networks and subnets if configured accordingly.
Test setup procedure:
Set up VM1 and VM2 on Node1 and VM3 on Node2, all having ports in the same Neutron Network N1 and all having addresses in 10.10.10.0/24 (this subnet is denoted SN1 in the following)
Set up VM4 on Node1 and VM5 on Node2, both having ports in Neutron Network N2 and having addresses in 10.10.11.0/24 (this subnet is denoted SN2 in the following)
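A hedged sketch of this setup with era-appropriate CLI commands; image and flavor names, the availability-zone host pinning and the network IDs are assumptions:

    # Networks and subnets
    neutron net-create N1
    neutron subnet-create --name SN1 N1 10.10.10.0/24
    neutron net-create N2
    neutron subnet-create --name SN2 N2 10.10.11.0/24

    # Boot the VMs, pinning them to the intended compute nodes
    nova boot --image IMAGE --flavor m1.small --nic net-id=N1_ID \
         --availability-zone nova:node1 VM1
    nova boot --image IMAGE --flavor m1.small --nic net-id=N1_ID \
         --availability-zone nova:node1 VM2
    nova boot --image IMAGE --flavor m1.small --nic net-id=N1_ID \
         --availability-zone nova:node2 VM3
    nova boot --image IMAGE --flavor m1.small --nic net-id=N2_ID \
         --availability-zone nova:node1 VM4
    nova boot --image IMAGE --flavor m1.small --nic net-id=N2_ID \
         --availability-zone nova:node2 VM5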
Test execution:
Create VPN1 with eRT<>iRT (so that connected subnets should not reach each other) and associate SN1 to it
Ping from VM1 to VM2 should work
Ping from VM1 to VM3 should work
Ping from VM1 to VM4 should not work
Associate SN2 to VPN1
Ping from VM4 to VM5 should work
Ping from VM1 to VM4 should not work
Ping from VM1 to VM5 should not work
Change VPN1 so that iRT=eRT
Ping from VM1 to VM4 should work
Ping from VM1 to VM5 should work
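A hedged CLI sketch of this flow; route-target values and network IDs are placeholders, and the subnet associations are expressed as network associations of N1 and N2, matching the reformulation in the comments at the end of this page:

    # eRT <> iRT: routes are exported but not re-imported, so the
    # associated subnets should not reach each other
    neutron bgpvpn-create --name VPN1 \
        --import-targets 64512:1 --export-targets 64512:2
    neutron bgpvpn-net-assoc-create VPN1 --network N1_ID

    # Second association (SN2 via its network N2)
    neutron bgpvpn-net-assoc-create VPN1 --network N2_ID

    # Make iRT = eRT so that SN1 and SN2 can reach each other
    neutron bgpvpn-update VPN1 \
        --import-targets 64512:1 --export-targets 64512:1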
Jira task in Yardstick for Test Case 1: https://jira.opnfv.org/browse/YARDSTICK-185
Test Case 2 - tenant separation
Name: Using VPNs for tenant separation
Description: Using VPNs to isolate tenants so that overlapping IP address ranges can be used
Test setup procedure:
Set up VM1 and VM2 on Node1 and VM3 on Node2, all having ports in the same Neutron Network N1.
VM1 and VM2 have IP addresses in a subnet SN1 with range 10.10.10.0/24
VM3 has an IP address in a subnet SN2 with range 10.10.11.0/24
Set up VM4 on Node1 and VM5 on Node2, both having ports in Neutron Network N2
VM4 has an address in a subnet SN1b with range 10.10.10.0/24
VM5 has an address in a subnet SN2b with range 10.10.11.0/24
Test execution:
Create VPN1 with iRT=eRT=RT1 and associate N1 to it
Ping from VM1 to VM2 and VM3 should work
Ping from VM1 to VM4 and VM5 should not work
Create VPN2 with iRT=eRT=RT2 and associate N2 to it
Ping from VM4 to VM5 should work
Ping from VM4 to VM1 and VM3 should not work
Jira task in Yardstick for Test Case 2: https://jira.opnfv.org/browse/YARDSTICK-192
[TM Comments:]
- I reformulated the above to account for the fact that one Neutron Subnet cannot be associated with more than one Network
- let's assume VM3 and VM5 have been allocated the same address 10.10.11.2; in that case:
"ping from VM1 to VM3 should work" translates to "on VM1 a ping to 10.10.11.2 succeeds"
"ping from VM1 to VM5 should not work" translates to "on VM1 a ping to 10.10.11.2 fails"
of course, both cannot be simultaneously true, so the test above is incorrect in that case
- I think the right way to check for proper support of address overlap is to verify that, when VM1 exchanges traffic with 10.10.11.2, VM1 is actually talking to VM3 and not to VM5. A way to achieve this is to use a real protocol connection (HTTP, SSH, netcat, ...) and have something returned that identifies the destination; for instance, VM3 would serve a file containing "I am VM3" and VM5 a file containing "I am VM5"; VM1 would then check that it received "I am VM3". Similarly, VM4 can check that it talks to VM5.
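A minimal sketch of that check using netcat; the served strings follow the comment above, and the nc option syntax is an assumption for a traditional netcat (it differs between implementations):

    # On VM3 (and analogously on VM5 with "I am VM5"): serve an identity
    # string on TCP port 8000
    while true; do echo "I am VM3" | nc -l -p 8000; done

    # On VM1: the overlapping address 10.10.11.2 must answer as VM3
    [ "$(nc -w 2 10.10.11.2 8000)" = "I am VM3" ] && echo PASS || echo FAIL

    # On VM4: the same address must answer as VM5
    [ "$(nc -w 2 10.10.11.2 8000)" = "I am VM5" ] && echo PASS || echo FAIL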