OPNFV Test Dashboard

Note: this is a working page; no decision has been made yet. It is to be discussed during the Functest and Test weekly meetings.

Introduction

All the test projects generate results in different formats. The goal of a testing dashboard is to provide a consistent view of the different tests from the different projects.

Overview

We may describe the overall system dealing with tests and testbeds as follows:

Each test project shall:

  • declare itself using the test collection API (see Description)
  • declare the testcases
  • push the results into the test DB

The test collection API is under review; it consists of a simple REST API associated with a simple JSON format, with the results collected in a MongoDB.
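
As an illustration only, a test project could interact with such an API as in the Python sketch below; the base URL, endpoint paths, and payload field names are assumptions made for the example, not the reviewed API.

import requests

# Hypothetical base URL and endpoints; the actual test collection API
# is still under review.
API_URL = "http://testresults.example.org/api"

# 1. Declare the test project itself
requests.post(API_URL + "/test-projects",
              json={"name": "functest",
                    "description": "functional testing for OPNFV"})

# 2. Declare one of its testcases
requests.post(API_URL + "/testcases",
              json={"name": "vPing",
                    "testproject": "1",
                    "description": "Virtual Ping"})

# 3. Push one raw result into the test DB (a MongoDB behind the API)
requests.post(API_URL + "/results",
              json={"testcase": "1.1",
                    "platform": "LF1",
                    "installer_type": "Fuel",
                    "version": "Arno R1",
                    "global_status": "OK",
                    "details": {"full_duration": 31}})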

A module shall extract the raw results pushed by the test projects to create a testing dashboard. This dashboard shall be able to show:

  • The number of testcases/test projects/ people involved/organization involved
  • All the tests executed on a given POD
  • All the tests from a test project
  • All the testcases executed on all the PODs
  • All the testcases in critical state for version X installed with installer I on testbed T
  • ….

It shall be possible to filter per (a query sketch is given after this list):

  • POD
  • Test Project
  • Testcase
  • OPNFV version
  • Installer
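
As a sketch of what such filtering could mean against the MongoDB backend (a minimal example, assuming the field names of the raw-result file shown later on this page; the database and collection names are assumptions):

import pymongo

# Hypothetical database and collection names.
client = pymongo.MongoClient("mongodb://localhost:27017/")
results = client["test_results"]["results"]

# e.g. all testcases in critical state for version X, installer I, testbed T
critical = results.find({
    "version": "Arno R1",         # OPNFV version
    "installer_type": "Fuel",     # installer
    "platform": "LF2",            # POD / testbed
    "global_status": "Critical",  # status/severity, to be commonly agreed
})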

For each test case we shall be able to see:

  • Error rate versus time
  • Duration versus time
  • Packet loss
  • Delay
  • ….

It shall also be possible to see the severity of the errors (to be commonly agreed; a hypothetical mapping is sketched after this list):

  • critical
  • major
  • minor
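
Since the severity levels are not yet agreed, the following is only a hypothetical sketch of how a post-processing step could derive a severity from an error rate:

def severity(error_rate):
    """Hypothetical mapping from error rate to severity; the actual
    levels and thresholds are still to be commonly agreed."""
    if error_rate > 0.5:
        return "critical"
    if error_rate > 0.2:
        return "major"
    return "minor"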

Open Questions

The test projects shall agree on:

  • the data model dealing with the test dashboarding (testbed / testcase / test project / test results / version / installer / …)
  • the filter criteria
  • the labels of the graphs (whether it shall be possible to add new labels; in any case all the test projects shall use the same set)

First Illustration

Based on Arno, the files related to the result collection API could look like:

  • test-projects.json: description of the test projects
  • test-functest.json: description of the functest test project
  • test-functest-vping.json: raw results of 3 executions of vPing on 2 PODs (LF1 and LF2)

test-projects.json (example):

[
  {
    "id": 1,
    "name": "functest",
    "description": "functional testing for OPNFV. OPNFV seen as a black box."
  },
  {
    "id": 2,
    "name": "yardstick",
    "description": "VNF testing framework"
  },
  {
    "id": 3,
    "name": "qtip",
    "description": "performance testing framework"
  },
  {
    "id": 4,
    "name": "storperf",
    "description": "storage performance"
  },
  {
    "id": 5,
    "name": "vsperf",
    "description": "virtual switch testing"
  }
]
 

test-functest.json (example):

[
  {
    "id": 1.1,
    "name": "vPing",
    "testproject": "1",
    "test": "VPing",
    "description": "Virtual Ping"
  },
  {
    "id": 1.2,
    "name": "ODL",
    "testproject": "1",
    "test": "",
    "description": "OpenDaylight functional test suite"
  },
  {
    "id": 1.3,
    "name": "Rally",
    "testproject": "1",
    "test": "",
    "description": "Rally Bench test suites for OpenStack"
  },
  {
    "id": 1.4,
    "name": "Tempest",
    "testproject": "1",
    "test": "",
    "description": "OpenStack functional test suite"
  },
  ...
]

test-functest-vping.json (example):

[
  {
    "id": "1.1.1",
    "testcase": "1.1",
    "timestamp": "1434517568",
    "platform": "LF1",
    "global_status": "OK",
    "hardware_details": "",
    "installer_type": "Fuel",
    "version": "Arno R1",
    "details": {
      "full_duration": 31
    }
  },
  {
    "id": "1.1.2",
    "testcase": "1.1",
    "timestamp": "1435182906",
    "platform": "LF2",
    "global_status": "OK",
    "hardware_details": "",
    "installer_type": "Foreman",
    "version": "Arno R1",
    "details": {
      "full_duration": 102
    }
  },
  {
    "id": "1.1.3",
    "testcase": "1.1",
    "timestamp": "1434836687",
    "platform": "LF2",
    "global_status": "OK",
    "hardware_details": "",
    "installer_type": "Foreman",
    "version": "Arno R1",
    "details": {
      "full_duration": 116
    }
  }
]

Functest automatically runs 4 suites:

  • vPing
  • ODL
  • Rally
  • Tempest

The results are available on Jenkins. We may then imagine that Functest calls the API to store the raw results of each test; the raw results will be stored in the MongoDB (as JSON documents). A module (to be created) will perform post-processing (e.g. from a cron job) to generate JSON files usable for dashboarding (as in Bitergia). In the example we build JSON files for the LF2 POD (we could do it for the last 30 days, the last 365 days, or since the beginning); a sketch of such a post-processing step is given after the vPing example below.

Such files could look like:

  • functest-vPing-LF2.json
  • functest-ODL-LF2.json
  • functest-Rally-LF2.json
  • functest-Tempest-LF2.json

functest-vPing-LF2.json (example):

{
  "id": [0, 1, 2, 3, 4, 5, 6, 7],
  "timestamp": [1430987654, 1430987832, 1431234567, 1431234999, 1432123456, 1432184561, 1434836687, 1435182906],
  "results": {
    "duration": [111, null, 100, 103, 101, 122, 105, 116],
    "status": ["OK", "KO", "OK", "OK", "OK", "OK", "OK", "OK"]
  }
}
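
A minimal sketch of the post-processing module mentioned above, assuming a pymongo client and the raw-result schema of test-functest-vping.json (the database, collection, and output file names are assumptions):

import json
import pymongo

# Hypothetical database and collection names.
client = pymongo.MongoClient("mongodb://localhost:27017/")
raw = client["test_results"]["results"]

def build_dashboard_file(testcase, pod, out_path):
    # Sort the raw results of one testcase on one POD by execution time.
    docs = sorted(raw.find({"testcase": testcase, "platform": pod}),
                  key=lambda d: int(d["timestamp"]))
    data = {
        "id": list(range(len(docs))),
        "timestamp": [int(d["timestamp"]) for d in docs],
        "results": {
            # None is dumped as null, i.e. a run without a duration
            "duration": [d["details"].get("full_duration") for d in docs],
            "status": [d["global_status"] for d in docs],
        },
    }
    with open(out_path, "w") as f:
        json.dump(data, f)

# Run e.g. nightly from cron:
build_dashboard_file("1.1", "LF2", "functest-vPing-LF2.json")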

functest-tempest-LF2.json (example):

{
  "id": [0, 1, 2, 3, 4, 5, 6, 7],
  "timestamp": [1430987654, 1430987832, 1431234567, 1431234999, 1432123456, 1432184561, 1434836687, 1435182906],
  "results": {
    "nb_tests": [105, 92, 104, 100, null, 101, 102, 105],
    "nb_failures": [32, 34, 41, 32, null, 28, 30, 29],
    "duration": [56, 62, 64, 58, null, 61, 63, 58],
    "status": ["OK", "OK", "OK", "OK", "Critical", "OK", "OK", "OK"]
  }
}

functest-odl-LF2.json (example):

{
  "id": [0, 1, 2, 3, 4, 5, 6, 7],
  "timestamp": [1430987654, 1430987832, 1431234567, 1431234999, 1432123456, 1432184561, 1434836687, 1435182906],
  "results": {
    "nb_tests": [18, 18, 18, 18, null, 18, 18, 18],
    "nb_failures": [15, 15, 3, 3, 3, 3, 3, 3],
    "duration": [23, 25, 18, 19, 18, 20, 18, 18],
    "status": ["OK", "OK", "OK", "OK", "OK", "OK", "OK", "OK"]
  }
}

functest-rally-LF2.json (example; only 2 modules, partially shown here):

{
  "timestamp": [1433279100, 1433279122],
  "id": [0, 1],
  "modules": [
    {
      "module_name": "authenticate",
      "scenarios": ["keystone", "validate_cinder", "validate_glance", "validate_heat", "validate_neutron", "validate_nova"],
      "results": [
        {
          "load_duration": [3.2, 0.95, 1.8, 1, 0.42, 0.68],
          "full_duration": [17.1, 6.1, 6.64, 6.68, 5.27, 5.62],
          "iterations": [100, 10, 10, 10, 10, 10],
          "errors": [0, 0, 0, 0, 0, 0],
          "sla": ["OK", "OK", "OK", "OK", "OK", "OK"]
        },
        {
          "load_duration": [3.1, 1.42, 2.22, 0.91, 0.39, 0.12],
          "full_duration": [17.8, 7.6, 8.0, 7.5, 6.0, 6.8],
          "iterations": [100, 10, 10, 10, 10, 10],
          "errors": [0, 0, 0, 0, 0, 0],
          "sla": ["OK", "OK", "OK", "OK", "OK", "OK"]
        }
      ]
    },
    {
      "module_name": "neutron",
      "scenarios": ["create_and_delete_networks", "create_and_delete_ports", "create_and_delete_routers", "create_and_delete_subnets", "create_and_list_networks", "create_and_list_ports", "create_and_list_routers", "create_and_list_subnets", "create_and_update_networks", "create_and_update_ports", "create_and_update_routers", "create_and_update_subnets"],
      "results": [
        {
          "load_duration": [4.2, 32.6, 7.0, 7.5, 3.8, 19.4, 13.2, 1.9, 1.3, 4.9, 2.7, 2.1],
          "full_duration": [9.4, 44.2, 20.8, 19.9, 12.69, 83.4, 52.04, 22.2, 8.4, 15.4, 14.6, 23.5],
          "iterations": [100, 100, 30, 100, 100, 100, 100, 10, 10, 10, 10, 10],
          "errors": [43, 16, 7, 33, 44, 17, 31, 1, 2, 2, 5, 1],
          "sla": ["OK", "OK", "OK", "OK", "OK", "OK", "OK", "OK", "OK", "OK", "OK", "OK"]
        },
        {
          "load_duration": [4.0, 32.4, 5.2, 7.6, 3.9, 20.3, 13.7, 2.3, 1.2, 4.69, 3.87, 2.71],
          "full_duration": [9.2, 43.7, 14.7, 18.2, 13.7, 85.1, 55.5, 23.9, 9.1, 14.9, 16.3, 23.5],
          "iterations": [100, 100, 30, 100, 100, 100, 100, 10, 10, 10, 10, 10],
          "errors": [35, 19, 11, 26, 46, 14, 28, 2, 3, 4, 3, 2],
          "sla": ["OK", "OK", "OK", "OK", "OK", "OK", "OK", "OK", "OK", "OK", "OK", "OK"]
        }
      ]
    }
  ]
}

Based on these JSON files, it shall be possible to draw the graphs.
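
For instance, a minimal sketch (assuming matplotlib and the functest-vPing-LF2.json file above) of a duration-versus-time graph:

import json
from datetime import datetime

import matplotlib.pyplot as plt

# Load the per-POD dashboard file generated by the post-processing step.
with open("functest-vPing-LF2.json") as f:
    data = json.load(f)

times = [datetime.fromtimestamp(t) for t in data["timestamp"]]
# Replace null entries (failed runs) with NaN so they appear as gaps.
durations = [d if d is not None else float("nan")
             for d in data["results"]["duration"]]

plt.plot(times, durations, marker="o")
plt.xlabel("date")
plt.ylabel("vPing duration (s)")
plt.title("vPing duration versus time on LF2")
plt.show()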
