Project: QTIP-Platform Performance Benchmarking

Project Information

Qtip Meeting Information

  • IRC: #opnfv-qtip
  • Schedule for C Release: TBD

Description:

  • QTIP is a performance benchmark suite for the OPNFV platform
  • QTIP aims to benchmark OPNFV platforms through a "Bottom-up" approach, testing bare-metal components first.
  • Emphasis is on platform performance through quantitative benchmarks rather than platform validation

The QTIP project takes a "Bottom-Up" approach to characterizing and benchmarking the OPNFV platform. Proper characterization of OPNFV platforms is critical to understanding how well OPNFV (and, by implication, its upstream components) performs in realistic deployment scenarios, to providing feedback to design teams, and to providing platform-level performance data to higher-layer developers, users, and the community at large.

QTIP also aims to build the testing and benchmarking tools needed to fulfill these goals, to automate wherever possible, and to provide these tools and automation software to upper-layer and other testing projects in OPNFV.

Scope:

The overall problem this project tries to solve is the general characterization of an OPNFV platform. It will focus on general performance questions that are common to the platform itself or applicable to multiple OPNFV use cases. QTIP will provide the capability to quantify a platform's performance behavior in a standardized, rigorous, and open way, along with a well-documented methodology so that anyone interested can reproduce the results.

Main activities include:

  • Identify components of OPNFV platforms that are important for VNF applications: NFV PER 001
  • These components can be divided into Computing, Networking and Storage characteristics
  • Identify benchmarks to evaluate the performance of these components
  • Develop or leverage tools to benchmark these components
  • Identify the test case scenarios in which to run these benchmarks
  • Automate the configuration/execution of tests along with the collection of test results
  • Represent test results in a standardized format for performance comparison between varying platform configurations
  • Develop a flexible framework for adding test cases
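As an illustration of the "standardized format" activity above, here is a minimal sketch of a per-benchmark result record. The field names are assumptions chosen for illustration, not the actual QTIP result schema:

```python
import json

def make_result(suite, benchmark, host, score, unit):
    """Assemble one benchmark result as a plain dict.

    Illustrative record layout only -- not QTIP's actual schema.
    """
    return {
        "suite": suite,          # "compute", "network", or "storage"
        "benchmark": benchmark,  # e.g. "dhrystone"
        "host": host,            # machine or VM the benchmark ran on
        "score": score,          # raw benchmark score
        "unit": unit,            # unit of the score, e.g. "lps"
    }

record = make_result("compute", "dhrystone", "pod3-node1", 32845321.0, "lps")
print(json.dumps(record, indent=2))
```

Serializing each record to JSON keeps results machine-comparable across platform configurations, which is the point of the standardized-format activity.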

To achieve these objectives, we intend to use both open-source and non-open-source commercial tools, under guidelines developed by the community.

Such a testing methodology and harness, once developed, will first be applied to quantify a platform's behavior from the lower layers up (the Bottom-up Approach). We believe understanding lower-layer behavior is a necessary prerequisite to understanding more complex upper-layer behavior with more interconnecting parts. Examples of such platform-level sub-systems may include (for illustration only): CPU, memory, NICs, storage, switching, hypervisors, containers, host OS, guest OS, vSwitch/vForwarding, TCP/IP, base OpenStack, base ODL/SDN, and so on.

Test cases:

Sample Tests:

Computing Benchmarks

Running QTIP benchmarks on machines configured with different components evaluates the influence of those components on computing performance. For example, comparing these benchmarks on bare-metal machines with different CPUs helps evaluate CPU performance, and the same approach can be used to test memory performance. Comparing the performance of these benchmarks on a bare-metal machine against a VM running on that same machine helps analyze the overhead of the hypervisor.

Some of the computing benchmarks include:

  • Dhrystone Benchmarks
  • Whetstone Benchmarks
  • Cache memory benchmark
  • Memory bandwidth benchmark
  • OpenSSL speed benchmark
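As a rough illustration of what a memory bandwidth measurement does, here is a deliberately naive sketch that times a single in-memory buffer copy. This is not the actual QTIP tool; dedicated benchmarks control for caching and access-pattern effects that this does not:

```python
import time

def memory_copy_bandwidth(size_mb=64):
    """Estimate memory copy bandwidth in MB/s by timing one buffer copy.

    Naive sketch for illustration only; real memory benchmarks use
    repeated, cache-aware access patterns.
    """
    src = bytearray(size_mb * 1024 * 1024)
    start = time.perf_counter()
    dst = bytes(src)                    # one full copy of the buffer
    elapsed = time.perf_counter() - start
    assert len(dst) == len(src)
    return size_mb / elapsed            # MB copied per second

print("~%.0f MB/s" % memory_copy_bandwidth())
```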

Networking Benchmarks

More details will be added.

Storage Benchmarks

These benchmarks can be used to compare storage performance for disks mounted locally as well as over a network. The storage components of different platforms can be compared using these benchmarks to obtain quantitative results for storage performance.

  • File I/O benchmark
  • Block Read/Write Benchmark
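The file I/O benchmark above can be sketched as timing sequential writes to a file and reporting throughput. This is illustrative only; real storage benchmarks bypass the page cache and exercise many more access patterns:

```python
import os
import tempfile
import time

def file_write_throughput(total_mb=32, block_kb=1024):
    """Time sequential block writes to a temporary file; return MB/s.

    Illustrative sketch, not a production storage benchmark.
    """
    block = b"\0" * (block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())  # force data to disk before stopping the clock
        elapsed = time.perf_counter() - start
    finally:
        os.remove(path)
    return total_mb / elapsed

print("~%.0f MB/s sequential write" % file_write_throughput())
```

The `fsync` call matters: without it, the timing mostly measures the page cache rather than the disk.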

Dependencies:

Software

  • OPNFV Arno-based installation (OpenStack)
  • Open-source benchmark tools, e.g. UnixBench
  • Python 2.7
  • Ansible 1.9.2

Related projects

  • Testbed Infrastructure (Pharos) on which the Platform Performance Benchmarking system will be built. OPNFV Project Proposal: Testbed Infrastructure
  • Bootstrap/GetStarted project that produces OPNFV software builds (or equivalent). Project: Bootstrap/Get started!, Appex
  • Infrastructure verification (Yardstick). Project: Yardstick - Infrastructure Verification deals with infrastructure validation of OPNFV platforms. QTIP aims to benchmark the performance of components within OPNFV platforms for quantitative analysis and does not deal with platform validation.
  • Testing methodology has a rich set of industry standards, e.g. ETSI NFV ISG, IETF, and so on. As we develop or adopt testing methodologies, we will reference these standards whenever applicable.

Reference POD

For the Brahmaputra release, QTIP will calculate three indices.

  • The Compute Suite Index
  • The Storage Suite Index
  • The Network Suite Index

These indices are calculated by comparing QTIP results with the QTIP reference results obtained on POD 3 of Dell's OPNFV lab.
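One plausible way to compute such an index, shown here purely as an assumption (the release documentation defines the actual formula), is a geometric mean of per-benchmark ratios against the reference POD:

```python
import math

def suite_index(results, reference):
    """Geometric mean of system-under-test scores over reference scores.

    `results` and `reference` map benchmark names to scores; an index of
    1.0 means parity with the reference POD. Illustrative formula only,
    not necessarily the one QTIP ships.
    """
    ratios = [results[name] / reference[name] for name in reference]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

reference = {"dhrystone": 100.0, "whetstone": 100.0}
results = {"dhrystone": 110.0, "whetstone": 90.0}
print(round(suite_index(results, reference), 3))  # prints 0.995
```

A geometric mean keeps the index symmetric: a benchmark twice as fast and one half as fast cancel out, which an arithmetic mean of ratios would not do.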

Reference POD hardware details

Reference POD QTIP results

Committers and Contributors:

Committers:

Contributors:

Planned deliverables

  • QTIP benchmarking suite
  • QTIP documentation: mostly on the wiki for test info, methodologies, references, and user guides
  • One or more operational QTIP systems on community testbeds
  • OPNFV script source tools for automation
  • Integration of results within
  • Repository for Test Suite and Results

Proposed Release Schedule:

  • Aligned with OPNFV Release Plan 2.

Meeting Minutes:

Documents:

platform_performance_benchmarking.txt · Last modified: 2016/03/07 16:58 by Vikram Dham