====== Testing Dashboard ======

==== Admin ====

It was decided to adopt the Mongo DB/API/JSON approach, see the [[http://ircbot.wl.linuxfoundation.org/meetings/opnfv-testperf/2015/opnfv-testperf.2015-09-03-16.03.html|dashboard meeting minutes]].
==== Introduction ====

All the test projects generate results in different formats. The goal of a testing dashboard is to provide a consistent view of the different tests from the different projects.

==== Overview ====
We may distinguish:

  * the data collection: test projects push their data using a REST API (a minimal push example is sketched below)
  * the production of the test dashboard: the LF web team develops a portal that will call the API to retrieve the data and build the dashboard
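To give a rough idea of the collection step, the sketch below shows how a test project could push one result record with a plain HTTP POST. It is only an assumption-based illustration: the endpoint path, the host and the payload field names are placeholders, not the agreed API contract.

<code python>
# Sketch of the collection step: a test project pushes one result record to the
# results API over HTTP. Endpoint URL and payload fields are assumptions.
import requests

result = {
    "project_name": "functest",   # assumed field: project declaring the result
    "case_name": "vPing",         # assumed field: test case name
    "installer": "fuel",          # assumed field: installer used for the deployment
    "version": "brahmaputra",     # assumed field: OPNFV version under test
    "pod_name": "opnfv-pod1",     # assumed field: testbed / POD identifier
    "details": {"duration": 15.3, "status": "OK"},  # raw output of the test case
}

# Hypothetical endpoint; the real base path of the results API may differ.
response = requests.post("http://testresults.opnfv.org/api/v1/results", json=result)
response.raise_for_status()
print("result stored, HTTP", response.status_code)
</code>
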
== Test collection ==
  * All the testcases in critical state for version X installed with installer I on testbed T (expressed as a REST query in the sketch below)
  * ....
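Queries like the ones above could map to simple REST calls with filter parameters. The sketch below is only an illustration; the endpoint, the parameter names and the response shape are assumptions.

<code python>
# Sketch of how "all the testcases in critical state for version X installed
# with installer I on testbed T" could be expressed as a REST query.
# Endpoint and parameter names are assumptions for illustration.
import requests

params = {
    "version": "brahmaputra",  # assumed filter: OPNFV version (X)
    "installer": "fuel",       # assumed filter: installer (I)
    "pod": "opnfv-pod1",       # assumed filter: testbed (T)
    "status": "critical",      # assumed filter: severity state
}

response = requests.get("http://testresults.opnfv.org/api/v1/results", params=params)
for record in response.json().get("results", []):
    print(record.get("case_name"), record.get("status"))
</code>
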
It shall be possible to filter by:
And also by the severity (to be commonly agreed) of the errors:
^ Severity ^ Description ^
| critical | not acceptable for release |
| major | failure rate? failed on N% of the installers? |
| minor | |
The expected load will not be a real constraint.

In fact the tests will be run less than 5 times a day and we do not need real time (an update every hour would be enough).
  * declare the testcases
  * push the results into the test DB
  * create a <my_project>2Dashboard.py file in [[https://git.opnfv.org/cgit/releng/tree/utils/test/result_collection_api/dashboard|releng]]; this file indicates how to produce the "ready to dashboard" data sets that are exposed afterwards through the API (see the sketch after the table below)
^ Project ^ Testcases ^ Dashboard-ready description ^
| Functest | vPing | graph 1: duration = f(time) \\ graph 2: bar graph (tests run / tests OK) |
| ::: | Tempest | graph 1: duration = f(time) \\ graph 2: (nb tests run, nb tests failed) = f(time) \\ graph 3: bar graph (nb tests run, nb tests failed) |
| ::: | odl | |
| ::: | rally-* | |
| Yardstick | Ping | graph 1: duration = f(time) \\ graph 2: bar graph (tests run / tests OK) |
| ::: | | |
| VSPERF | | |
| QTip | | |
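As a rough idea of what such a <my_project>2Dashboard.py file could contain, here is a minimal sketch for the Functest vPing case listed in the table above. The module name, the function names and the record fields are assumptions; the actual files in releng define the real interface.

<code python>
# myproject2Dashboard.py -- hypothetical example of a <my_project>2Dashboard.py
# module. It turns raw records from the test DB into "ready to dashboard" data
# sets such as "graph 1: duration = f(time)". Function names, record fields and
# output shape are assumptions, not the agreed interface.

def get_cases():
    """List the test cases this project exposes to the dashboard."""
    return ["vPing"]


def format_vping_for_dashboard(results):
    """Build the "duration = f(time)" series from raw vPing result records.

    `results` is assumed to be a list of dicts holding at least a
    'creation_date' key and a 'details' dict with a 'duration' entry.
    """
    series = {"name": "vPing duration", "points": []}
    for record in results:
        details = record.get("details", {})
        series["points"].append(
            {"x": record.get("creation_date"), "y": details.get("duration")}
        )
    return series
</code>
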
==== First studies for dashboarding ====

  * home made solution
  * [[http://bitergia.com/|bitergia]], as used for code in OPNFV: [[http://projects.bitergia.com/opnfv/browser/]]
  * [[https://wiki.jenkins-ci.org/display/JENKINS/Dashboard+View|Jenkins Dashboard view plugin]]
  * [[http://butleranalytics.com/5-free-open-source-bi/|BI solution]]
==== Visualization examples ====

  * Example of a home made solution on functest/vPing: [[vPing4Dashboard example]]
  * Example views using the ELK stack (Elasticsearch, Logstash, Kibana), with an indexing sketch after this list:
    * [[opnfv_test_dashboard/opnfv_kibana_dashboards]]
    * Visualize Functest (vPing/Tempest) results: [[opnfv_test_dashboard/functest_elk_example|ELK example for FuncTest]]
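For the ELK-based views, results first have to be indexed in Elasticsearch before Kibana can chart them. The sketch below indexes a single made-up Functest record; the index name, the document fields and the port are assumptions for illustration only.

<code python>
# Sketch: index one Functest result document into Elasticsearch so that Kibana
# can chart it. Index name, document fields and the local Elasticsearch port
# (9200, see the test results server section below) are assumptions.
import requests

doc = {
    "project_name": "functest",
    "case_name": "vPing",
    "creation_date": "2016-01-19T14:33:00Z",
    "details": {"duration": 15.3, "status": "OK"},
}

# Hypothetical index and document type on the results server.
response = requests.post("http://localhost:9200/test_results/result", json=doc)
print(response.status_code, response.json())
</code>
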
| + | |||
| + | ===== Test results server ===== | ||
| + | |||
| + | Test results server: | ||
| + | |||
| + | * Single server which hosts and visualizes all OPNFV test results | ||
| + | * Testresults.opnfv.org (also testdashboard.opnfv.org) - 130.211.154.108 | ||
| + | * The server will host | ||
| + | * Test results portal / landing page (nginx) | ||
| + | * Test results data base (MongoDB) | ||
| + | * Yardstick specific data base (InfluxDB) | ||
| + | * ELK stack - with Kibana to serve as Test Dashboard | ||
| + | * Grafana (for Yardstick results visualization) | ||
| + | * (future) - use Kafka as message broker and hook up data-bases (ES, Mongo, ..) to Kafka | ||
| + | |||
| + | Port assignment (for FW): | ||
| + | * 80 - nginx - landingpage | ||
| + | |||
| + | Port assignment (local) | ||
| + | * 5000 - logstash | ||
| + | * 5601 - Kibana | ||
| + | * 8083, 8086 - InfluxDB | ||
| + | * 8082, tornado | ||
| + | * 3000 - Grafana | ||
| + | * 9200-9300 - Elasticsearch APIs | ||