
StorPerf Team Weekly Meeting

Every second Wednesday at 1500 UTC during the winter
(16:00 CET, 10:00 EST, 07:00 PST)
Every second Wednesday at 1400 UTC during North American DST
Chaired by mbeierl (Mark Beierl)
IRC Channel #opnfv-meeting on Freenode
https://global.gotomeeting.com/join/852700725
United States (Toll-free): 1 877 309 2073
United States : +1 (571) 317-3129
Access Code: 852-700-725

Agenda for January 20, 2016 at 1500 UTC

Agenda:

  • Definition of Done
    • What does it mean for StorPerf to be complete?
    • How do we know we are done?
    • Who verified functionality?
    • Who verified documentation?
  • Documentation
    • Some things should be moved out of the wiki and into .rst files so that they can be built as part of the generated documentation
      • API
      • Build and installation
    • Who can help work on this?

Agenda for January 6, 2016 at 1500 UTC

Note: The December 23, 2015 meeting was canceled for the Christmas holidays.

Agenda:

  • Roll Call / Agenda Bashing
  • Updates

Attendees:

  • Mark Beierl (EMC)
  • Edgar StPierre (EMC)
  • Larry Lamers (VMware)
  • Nataraj Goud (TCS)
  • Daniel Smith (Ericsson)

Minutes:

  • Updates
    • Build pipeline for StorPerf includes producing the Docker image and uploading it directly upon successful merge
    • Heat template integration is under way
    • Milestone E report was due January 5 and will be issued today
    • Welcome to two new contributors from Tata:
      • Srinivas Tadepalli
      • Nataraj Goud
  • Inclusion of StorPerf in Brahmaputra
    • StorPerf is an exhaustive test of the disk subsystem and is therefore not included in the CI pipeline
    • Instructions to be posted on how to install and run StorPerf with OPNFV after an initial deployment is successful
      • Note: Technically these same instructions should work for any OpenStack deployment
    • StorPerf is not part of Yardstick or Functest dashboards. Tests must be run manually
  • C Release
    • A new page will be created for brainstorming ideas on C release planning
    • Will include focus on Object Storage testing
    • This is also the place to discuss potential Yardstick and QTIP integration (see section on Storage Test Suite)
  • Plugfest
    • A page or etherpad will be created for Plugfest ideas
    • What sort of tests can we run?
    • What disk access technologies can be demonstrated?
    • Need to check if proprietary software (e.g., ScaleIO) is allowed here

Agenda for December 9, 2015 at 1500 UTC

Note: This is the first meeting since the time changed in parts of the world.

Agenda:

  • Roll Call / Agenda Bashing
  • Updates
  • Open Issues
    • Fuel Integration - Ramp up needed
    • Docker image vs QCOW2 for delivery
    • Lab for Fuel integration testing

Attendees:

  • Mark Beierl (EMC)
  • Edgar StPierre (EMC)
  • Stephen Blinick (Intel)

Minutes:

  • Discussion: What does preconditioning mean to a Cinder volume that is mapped to a larger disk array?
    • Because we do not have a full view of the array, and each Cinder volume is merely a slice of the underlying storage, we can only approximate preconditioning
    • Writing zeros will be translated to no-ops by most back ends
    • FIO must be instructed to refill its buffers, or the same data might be written repeatedly, causing deduplication or other caching to be invoked by the storage back end (see the fio sketch following these minutes)
    • Conventional wisdom is that at least 50% of the unit must be exercised to get proper testing - anything less probably just exercises cache. If the volume is backed by a filesystem (such as Ceph), it behaves very differently
    • Decision is to go wide, with many VMs each attached to its own volume
    • For example, the CBT Ceph benchmark uses many FIO jobs with 4 MB block size writes
  • Discussion: FIO strange output
    • It has been noted that there are periods of 0 IOPS being reported in the JSON output from FIO
    • Apparently the 14.04 distro version of FIO (fio-2.1.3) has JSON output bugs
    • FIO 2.2.11 has fixes submitted for this
  • Discussion of workload and warmup:
    • Do we need:
      • Step function for block size, or preconditioning?
      • To redo preconditioning before each workload?
    • General consensus is:
      • Content has to be random (see note on FIO buffers above)
      • Mix of random writes at 4K block size to fill volume and then sequential writes at 1M block size
      • This should cause "fragmentation of internal tables"
      • Should not need to redo this between workloads as the disks are perturbed enough; all block sizes and queue depths should benefit
  • Discussion of API
    • Noted that we need a Cancel API. This was proposed but has not made it into the API page.
    • StorPerf master API needs to be defined. This would be the high level API that is used to spin up the various VMs and use the existing lower level API to run the master test suite.
    • Should probably give better names to workloads (e.g., mix instead of rw)
    • Need some reporting clarification in API document:
      • Latency reported is Completion Latency (clat)
      • Change loop to be block size outer, queue depth inner
    • Report:
      • Add 95th, 99th percentiles as options to report
      • MB/s for sequential workloads is also an interesting option
      • Submission latency is interesting as a talking point and something we might want to discuss with DPACC, as there may be kernel tuning that can improve submission performance. We are not showing this in the report at this time
      • Basic report will be read/write IOPS and average latency only
    • Overall need to decide how wide to go:
      • Determine total storage available to Cinder and divide that up with an algorithm? Autodetection might work
      • Probably better to have master API define how many VMs to use and how large each Cinder volume should be
    • Would like to get a programmatic way to show the latency wall: keep going wider on VMs hitting Cinder until latency climbs beyond 30 ms. This would be considered the practical maximum number of IOPS the Cinder back end supports (see the sketch following these minutes).
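
A minimal command-line sketch of the preconditioning discussed above, driven from Python. This is an illustration only, not the actual StorPerf job definitions: the target device path /dev/vdb is hypothetical, while the fio options shown (refill_buffers, randrepeat=0) are the standard flags for keeping buffer contents random so the back end cannot deduplicate or cache repeated data.

    import subprocess

    TARGET = "/dev/vdb"  # hypothetical device path of an attached Cinder volume

    def precondition(target=TARGET):
        """Fill the volume with random 4K writes, then sequential 1M writes."""
        common = [
            "fio",
            "--ioengine=libaio", "--direct=1",
            "--refill_buffers",   # regenerate buffer contents for every write
            "--randrepeat=0",     # do not repeat the same pseudo-random pattern
            "--filename=" + target,
            "--output-format=json",
        ]
        # Pass 1: random writes at 4K block size to fill the volume
        subprocess.check_call(common + ["--name=precondition-rand",
                                        "--rw=randwrite", "--bs=4k", "--iodepth=32"])
        # Pass 2: sequential writes at 1M block size
        subprocess.check_call(common + ["--name=precondition-seq",
                                        "--rw=write", "--bs=1m", "--iodepth=8"])

    if __name__ == "__main__":
        precondition()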

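A sketch of the programmatic latency wall check mentioned in the last bullet, assuming a hypothetical helper run_fio_across_vms(vm_count) that launches the workload on that many VMs (each with its own Cinder volume) and returns fio's JSON output. The field names follow the fio 2.x JSON layout (completion latency under "clat", in microseconds, with a "percentile" map keyed by strings such as "95.000000"); fio 3.x renames these fields, so the parsing below may need adjusting.

    import json

    LATENCY_WALL_MS = 30.0  # threshold from the discussion above

    def mean_write_clat_ms(fio_json_text):
        """Mean completion latency (clat) in ms, from fio 2.x JSON output."""
        job = json.loads(fio_json_text)["jobs"][0]
        # The same structure also carries job["write"]["clat"]["percentile"],
        # which the 95th/99th percentile report options could read.
        return job["write"]["clat"]["mean"] / 1000.0

    def find_latency_wall(run_fio_across_vms):
        """Keep adding VMs until clat exceeds 30 ms; return the last good width."""
        vm_count = 1
        while True:
            latency = mean_write_clat_ms(run_fio_across_vms(vm_count))
            if latency > LATENCY_WALL_MS:
                # The previous width is the practical maximum the back end supports
                return vm_count - 1
            vm_count += 1
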
Agenda for Friday October 16, 2015 at 1400 UTC

Agenda:

  • Roll Call / Agenda Bashing
  • Updates
    • First commits to git
    • Releng integration
    • Spirent Lab
  • Sample Metrics
    • Grafana
  • Meeting schedule for October
    • No meeting during OpenStack Tokyo (October 30)
    • Meet next week (October 23) or in 3 weeks (November 6)

Attendees:

  • Mark Beierl
  • QiLiang
  • Edgar StPierre
  • Iben Rodriguez

Minutes:

http://ircbot.wl.linuxfoundation.org/meetings/opnfv-meeting/2015/opnfv-meeting.2015-10-16-14.00.html

Actions:

  • Mark to remove test call hook as part of git review
  • Edgar to look into scheduling time with QTIP and Yardstick to discuss integration at the OPNFV Summit
  • Iben to update Spirent Lab (Pharos) Wiki
  • Iben to provide more information on how to access Spirent lab and get a full virtual cloud stack set up

Next meeting is in 3 weeks: November 6, 2015

Agenda of First Meeting: Friday October 2, 2015 at 1400 UTC

Agenda:

Attendees:

  • Mark Beierl (EMC)
  • Edgar StPierre (EMC)
  • Al Morton (AT&T)
  • Stephen Blinick (Intel)
  • Jose Lausuch (Ericsson)
  • Vikram Dham (Dell)
  • Malathi Malla (Spirent)

Minutes:

http://ircbot.wl.linuxfoundation.org/meetings/opnfv-meeting/2015/opnfv-meeting.2015-10-02-14.00.html
