Project: StorPerf - Storage Performance Benchmarking for NFVI

Description

The purpose of StorPerf is to provide a tool to measure block and object storage performance in an NFVI. When complemented with a characterization of typical VNF storage performance requirements, it can provide pass/fail thresholds for test, staging, and production NFVI environments.

The benchmarks developed for block and object storage will be sufficiently varied to provide a good preview of expected storage performance behavior for any type of VNF workload. The elements of the project include:

  • Test Case definition
  • Metrics definition
  • Test Process definition
  • Tool development

Some of these are expanded further below.

Project References

Key Project Facts


StorPerf Project Scope

StorPerf testing addresses both block storage and object stores, albeit with different test suites. There is limited value in testing locally attached storage, so the focus is primarily on distributed/external storage environments.

StorPerf is intended to run standalone benchmark tools and to provide integration with test frameworks such as Qtip and Yardstick.

Use Cases

There are three applicable use cases for these storage performance benchmarks:

  1. An OPNFV test lab manager wants to characterize expected storage behavior in a test NFVI deployment. This will include both a preconditioning phase for each storage environment and the broadest set of test cases across all identified storage services, providing VNF test applications with information about expected storage performance. This will integrate with existing test lab tool chains.
  2. A Service Provider wants to validate storage performance in an NFVI staging environment prior to production deployment. This will validate performance expectations using pass/fail conditions, with the same preconditioning and test cases as for a test lab. This will integrate with project Bootstrap.
  3. A Service Provider wants to isolate performance problems in a production NFVI environment. This will use a much narrower set of test cases to minimize impact on the production environment, and will rely on manual deployment and control of the test VMs.

Timeline

The high level plan for StorPerf is to deliver (minimally) test requirements and test process specifications in the Brahmaputra release timeframe. Block performance testing will lead object testing, and could also be delivered in Brahmaputra, though any such delivery would be asynchronous to, and largely independent of, the Brahmaputra release mechanism. In the C release, we will complete object store testing and integration with Qtip and Yardstick.

Project Planning: TBD

Test Cases

This is an outline of the test cases. A specification will be written capturing the actual tests and steps, and the input to the test process will be determined by community participation.

Block Storage

These tests assume iSCSI-attached storage, though local direct-attached or Fibre Channel-attached storage could also be tested.

  1. Preconditioning of defined Logical Block Address range (period TBD)
  2. Testing across each combination of queue depths (1, 16, 128) and block sizes (4KB, 64KB, 1MB)
  3. For each of 5 workloads: the four corners (100% sequential read, 100% sequential write, 100% random read, 100% random write) and a mixed workload (70% random read / 30% random write); a sketch for driving this matrix follows below.
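The sketch below is one way to drive this matrix, assuming fio as the I/O generator, /dev/vdb as the iSCSI-attached volume under test, and a 60-second run per case; all three are illustrative assumptions rather than decisions captured on this page.

  import itertools
  import subprocess

  DEVICE = "/dev/vdb"            # assumed volume under test
  QUEUE_DEPTHS = [1, 16, 128]
  BLOCK_SIZES = ["4k", "64k", "1M"]
  # Five workloads: the four corners plus a 70/30 random read/write mix.
  WORKLOADS = [("read", None), ("write", None), ("randread", None),
               ("randwrite", None), ("randrw", 70)]

  def run_case(rw, rwmixread, bs, iodepth):
      cmd = ["fio", "--name=storperf-case", "--filename=" + DEVICE,
             "--rw=" + rw, "--bs=" + bs, "--iodepth=" + str(iodepth),
             "--ioengine=libaio", "--direct=1",
             "--time_based", "--runtime=60",    # assumed duration
             "--output-format=json"]
      if rwmixread is not None:
          cmd.append("--rwmixread=" + str(rwmixread))
      return subprocess.check_output(cmd)       # JSON results per case

  # Precondition first: one full sequential-write pass over the device
  # (the preconditioning period above is still TBD).
  subprocess.check_call(["fio", "--name=precondition",
                         "--filename=" + DEVICE, "--rw=write", "--bs=1M",
                         "--iodepth=16", "--ioengine=libaio", "--direct=1"])

  for (rw, mix), bs, qd in itertools.product(WORKLOADS, BLOCK_SIZES,
                                             QUEUE_DEPTHS):
      run_case(rw, mix, bs, qd)

Each case emits fio's JSON output, which could feed the reporting format described under Metrics below.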

Object Storage

These tests assume an HTTP-based API, such as Swift, for accessing object storage.

  1. Determine the maximum concurrency of the SUT with smaller data size (GET/PUT) tests by finding the performance plateau
  2. Determine the maximum TPS of the SUT using variable payload sizes (1KB, 10KB, 100KB, 1MB, 10MB, 100MB, 200MB)
  3. Use 5 different GET/PUT workloads for each: 100/0, 90/10, 50/50, 10/90, 0/100
  4. Perform a separate metadata concurrency test for the SUT using List and Head operations

We are especially looking for workload recommendations for testing in this area.
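As an illustration of the concurrency and GET/PUT-mix tests above, the sketch below runs a 50/50 workload of 10KB objects against a Swift-style endpoint. The URL, token, and concurrency level are placeholders, and objects obj-0 through obj-99 are assumed to have been uploaded in a setup phase so that GETs succeed.

  import random
  import time
  from concurrent.futures import ThreadPoolExecutor

  import requests

  STORAGE_URL = "http://swift.example.com/v1/AUTH_test/storperf"  # placeholder
  HEADERS = {"X-Auth-Token": "AUTH_tk_placeholder"}               # placeholder
  PAYLOAD = b"x" * 10240    # 10KB objects, one point in the size sweep
  GET_RATIO = 0.5           # the 50/50 GET/PUT workload
  CONCURRENCY = 32          # swept upward to find the plateau
  N_REQUESTS = 1000

  def one_op(i):
      url = "%s/obj-%d" % (STORAGE_URL, i % 100)
      if random.random() < GET_RATIO:
          r = requests.get(url, headers=HEADERS)
      else:
          r = requests.put(url, headers=HEADERS, data=PAYLOAD)
      return r.ok

  start = time.time()
  with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
      results = list(pool.map(one_op, range(N_REQUESTS)))
  elapsed = time.time() - start
  print("TPS: %.1f  errors: %d" % (N_REQUESTS / elapsed,
                                   results.count(False)))

Sweeping CONCURRENCY upward while watching TPS is one way to locate the performance plateau called for in test 1.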

Metrics

Initially, metrics will be for reporting only and there will not be any pass/fail criteria. In a future iteration, we may add pass/fail criteria for use cases that test viability against known workload requirements.

Block Storage Metrics

The mainstays for measuring performance in block storage are fairly well established in the storage community, with the minimum being IOPS and Latency. These will be produced in report/tabular format capturing each test combination for:

  1. IOPS at a fixed maximum latency (TBD; we could also choose to report IOPS when the test hits the latency "wall"). Note that throughput can be calculated as IOPS * block size, as in the worked example below.
  2. Average latency for each workload at different IOPS levels
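To make the IOPS-to-throughput relationship in item 1 concrete, a minimal worked example (the 10,000 IOPS / 4KB figures are illustrative, not measured results):

  # Throughput follows directly from IOPS and block size.
  def throughput_mbps(iops, block_size_bytes):
      return iops * block_size_bytes / 1.0e6   # MB/s

  print(throughput_mbps(10000, 4096))   # 10,000 IOPS at 4KB -> ~41 MB/s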

Object Storage Metrics

Object storage delivers different storage characteristics than block storage, and so the metrics used to characterize it vary to some degree:

  1. Transactions per second (throughput can also be calculated from TPS * object size)
  2. Error rate
  3. Per-test average latency
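As a sketch of how these three metrics could be rolled up per test (the helper and its inputs are illustrative, not an actual StorPerf interface):

  # Per-test rollup of the object metrics listed above.
  def object_metrics(latencies_s, error_count, object_size_bytes, elapsed_s):
      total = len(latencies_s) + error_count          # successes + failures
      tps = total / elapsed_s                         # transactions per second
      return {"tps": tps,
              "throughput_mbps": tps * object_size_bytes / 1.0e6,
              "error_rate": error_count / float(total),
              "avg_latency_s": sum(latencies_s) / len(latencies_s)}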

See also future extensions below.

Future Project Extensions

These are ideas for extending StorPerf in the second and later releases.

  1. Definition of more extensive metrics to measure performance (e.g., I/O Latency variation for object streaming); some of these may require contributions to upstream open source test tools
  2. Time-to-first-write for newly provisioned block volumes. This is intended to measure the impact of zero-out functions performed by storage systems when a volume is provisioned.
  3. Full integration with Qtip and Jenkins for automated deployment and reporting
  4. Create a separate deliverable (document) to capture typical/expected VNF storage performance requirements using the same metrics, for those VNFs that require block or object storage I/O. This can be used to define pass/fail criteria for test lab deployments.

Contributors

Committers
