Project “OPNFV – Base system functionality testing” will provide a comprehensive testing methodology, test suites and test cases to test and verify OPNFV platform functionality covering the VIM and NFVI components.
This project uses a top-down approach that starts with chosen ETSI NFV use cases and open source VNFs for functional testing.
This project will develop test suites that cover detailed functional test cases, test methodologies and platform configurations, which will be documented and maintained in a repository for use by other OPNFV testing projects and the community in general. Developing these test suites will also help lay the foundation for a test automation framework that can later be used by the continuous integration (CI) project (Octopus). We envisage that certain VNF deployment use cases could be automatically tested as an optional step of the CI process.
The project targets testing of the OPNFV platform in a hosted test-bed environment (i.e. using the OPNFV test labs worldwide). It will leverage the output of the "BGS" project.
The key objectives are:
“OPNFV – Base system functionality testing” will deliver a functional testing framework along with a set of test suites and test cases to test and verify the functionality of the OPNFV platform. The testing framework (tools, test cases, etc.) is also intended to be used by the CI framework for the purpose of qualifying the OPNFV platform on bare-metal servers. In this context, OPNFV Tester will use open source VNF components. Functional testing includes:
The project requires the following components:
Intel POD2 (contact Trevor Cooper) is dedicated to functional testing.
Functional tests shall be
TODO: shall we be more prescriptive on the tooling environment (creation of the VM, installation of the tools)?
For release 1, we target the automation of the following tests.
The list of test cases can be found here.
At the end of a fresh install, the status of the OPNFV solution according to the selected installer can be summarized as follows:
| | Fuel | Foreman |
|---|---|---|
| Images | none | |
| Networks | none | |
| Flavors | | |
| OpenStack creds | admin/octopus | |
We re-used Rally scenarios to test OpenStack (bench + Tempest). The default scenarios cover the following modules: authenticate, nova, cinder, glance, keystone, neutron, quotas, requests, tempest-do-not-run-against-production, heat, mistral, sahara, vm, ceilometer, designate, dummy, zaqar.

The first ones (authenticate, nova, cinder, glance, keystone, neutron, quotas, requests, tempest-do-not-run-against-production) can be re-used as provided.

However, the scenarios shall be tuned, especially for the bench suite.
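Tuning mostly means lowering the load parameters so a small POD is not overloaded. As a sketch (the scenario name, flavor and image names are illustrative and not a verified Functest configuration), a reduced Rally task file could be generated like this:

```python
import json

# Hypothetical tuned Rally task: fewer iterations and lower concurrency
# than the defaults, since the neutron results below suggest that 100
# iterations can be too much stress for a small environment.
task = {
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "flavor": {"name": "m1.tiny"},  # must exist on the SUT
                "image": {"name": "cirros"},    # must exist on the SUT
            },
            "runner": {"type": "constant", "times": 10, "concurrency": 2},
            "context": {"users": {"tenants": 1, "users_per_tenant": 1}},
        }
    ]
}

with open("tuned-nova.json", "w") as f:
    json.dump(task, f, indent=2)
```

The resulting file would then be handed to Rally (e.g. `rally task start tuned-nova.json`).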
| Bench suite | Orange POD | Ericsson POD | LF POD1 | LF POD2 |
|---|---|---|---|---|
| Glance | 1 test KO (GlanceImages.create_image_and_boot_instances; call sh file, watch path in scenario) | 100% OK | - | KO (400 Bad Request, URL invalid) |
| Cinder | all KO (config?) | 100% OK | - | 80% OK (some time-outs) |
| Heat | N.A. | 33% OK | - | N.A. |
| Nova | all KO (config?) except create-and-list-keypair and create-and-delete-keypair | ~50% OK; problems with live migration and security groups | - | ~50% OK; problems with live migration and security groups |
| Authenticate | OK | 100% OK | - | 100% OK |
| Keystone | OK | 100% OK | - | 100% OK |
| Neutron | OK | some tests 100% success and others 29%, possibly because of a limited range for neutron; maybe 100 iterations is too much stress for a small environment | - | 80% OK |
| VM | KO (config / floating IP) | KO, probably due to network setup | - | KO (floating IP) |
| Quotas | OK | 100% OK | - | 100% OK |
| Requests | OK | 100% OK | - | KO (IncompleteRead ~ time-out) |
Tempest is a special case of a scenario that can be run by Rally.
| Tempest | Orange POD | Ericsson POD | LF POD1 | LF POD2 |
|---|---|---|---|---|
| smoke | 20 failures on 108 tests | 33 failures on 84 tests | | |
| all | 170 failures on 951 tests | 243 failures on 875 tests | | |
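The failure counts above are easier to compare across PODs as pass rates; a trivial helper (the numbers are taken from the table above):

```python
def pass_rate(failures: int, total: int) -> float:
    """Percentage of passing tests, rounded to one decimal place."""
    return round(100.0 * (total - failures) / total, 1)

# Smoke suite
print(pass_rate(20, 108))   # Orange POD   -> 81.5
print(pass_rate(33, 84))    # Ericsson POD -> 60.7
# Full suite
print(pass_rate(170, 951))  # Orange POD   -> 82.1
print(pass_rate(243, 875))  # Ericsson POD -> 72.2
```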
Note: during the first manual launch on the alpha Orange platform installed with the opensteak installer, there were many errors (196) when running the Tempest scenario and some in the Rally scenarios (results to be analyzed).
Studies on the test cases shall be done.
<Peter>
<Malla>
<Andrew & Martin> see vIMS Functional Testing for details.
See Octopus etherpad: https://etherpad.opnfv.org/p/octopus
Community platforms connected to CI
| Test case | Jira ref | Documentation | Manual test | Automated | Doc | BGS link | Comments |
|---|---|---|---|---|---|---|---|
| Rally Bench | https://jira.opnfv.org/browse/FUNCTEST-1 | installation procedure described in https://github.com/Orange-OpenSource/opnfv/blob/master/docs/TEST.md | OK | OK | KO | Installed on jump host server of Intel POD 2 #1; Rally natively integrated in Fuel #2; tested with opensteak #3 | Morgan. The Rally test suite can be performed, but flavors and images are missing on the OpenStack SUT deployed on POD 2 |
| Tempest | https://jira.opnfv.org/browse/FUNCTEST-2 | | OK | KO | KO | Use of khalisi for foreman/puppet #1; Rally natively integrated in Fuel #2; tested with opensteak #3 | Tempest not working on POD: same issue as on #3; patch applied but there still seems to be a problem ⇒ contact openstack-rally |
| vPing | https://jira.opnfv.org/browse/FUNCTEST-3 | | KO | KO | KO | | Malla |
| vIMS | https://jira.opnfv.org/browse/FUNCTEST-4 | Based on the Clearwater solution | KO | KO | KO | | Martin; see vIMS Functional Testing for details |
| ODL | https://jira.opnfv.org/browse/FUNCTEST-5 | | ? | ? | KO | | Peter ⇒ https://etherpad.opnfv.org/p/robotframework |
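The vPing test case boils down to booting a VM and verifying that it is reachable, which is exactly where the floating-IP failures noted above show up. A minimal reachability sketch (a TCP connect is used here as an unprivileged stand-in for the ICMP ping that vPing itself performs; host and port are whatever the deployed VM exposes):

```python
import socket
import time

def is_reachable(host: str, port: int = 22, timeout: float = 2.0) -> bool:
    """Return True if `host` accepts a TCP connection on `port`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def vping(host: str, port: int = 22, retries: int = 10,
          delay: float = 3.0) -> bool:
    # A freshly booted VM needs time to acquire its (floating) IP,
    # so poll a few times before declaring the test KO.
    for _ in range(retries):
        if is_reachable(host, port):
            return True
        time.sleep(delay)
    return False
```

An automated run would mark the test OK as soon as the VM answers, and KO after the retry budget is exhausted.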
A new page has been created to list the tasks for Functest beyond R1.
Project deliverables: the project delivers the following components:
OPNFV release #1.