POD 3 - Characterize vSwitch Performance

***NOTICE***

POD 3 is now used for vsperf CI support: it runs the daily jobs and commit gate validation. As such IT IS NOT RECOMMENDED TO USE POD 3 AS A SANDBOX AT THIS TIME. Efforts are underway to set up another sandbox environment for VSPERF. Feel free to follow the example connections and configurations shown below in your own environment.

Overview

Testbed POD3 is available to the OPNFV community for testing and developing vswitchperf. In general, the testing environment consists of an IXIA traffic generator and a Linux machine (referred to as the DUT) dedicated to the vswitchperf test suite and its dependencies.

The testbed can be accessed over an OpenVPN connection once the required certificates have been issued and delivered to the interested OPNFV member. See below for contact details.
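Once the certificates are in place, the VPN connection is typically established with the standard OpenVPN client. A minimal sketch, assuming the delivered profile is saved as pod3.ovpn (the actual file name comes with your credentials):

    # connect to the POD3 VPN using the delivered profile (name is a placeholder)
    sudo openvpn --config pod3.ovpn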

There are multiple machines in the POD3 setup, but for vswitchperf testing the following three are important:

DUT/testbed
    IP: 10.4.2.1
    OS: Linux CentOS 7
    Credentials: user/user
    Access: ssh, e.g. ssh user@10.4.2.1

Traffic Gen Windows Client
    IP: 10.4.2.0
    OS: Windows 8 (VM)
    Credentials: administrator/P@ssw0rd
    Access: rdesktop, e.g. rdesktop -u administrator -g 1024x1024 10.4.2.0

Traffic Gen Server/testbed2
    IP: 10.4.2.4
    OS: Linux CentOS 7
    Credentials: non-root account with sudo privileges, username opnfv, password octopus
    Access: ssh, e.g. ssh opnfv@10.4.2.4

Please check the intel_hosting page for details about the OpenVPN setup and the POD3 testbed. If you need credentials for OpenVPN access to POD3, contact Jack Morgan <Jack Morgan@intel.com>

POD 3 Booking Calendar

To use POD 3, please reserve a slot in the booking calendar at https://wiki.opnfv.org/wiki/pod3_booking_calendar

POD3 Network Info

NIC connections

Testbed2 NIC (Traffic Gen Server) -> connected to -> Testbed NIC (DUT)

ens513f0: ether 00:1e:67:e2:67:e0 -> connected to -> eno1: ether 90:e2:ba:4a:7f:b0

ens513f1: ether 00:1e:67:e2:67:e1 -> connected to -> ens2f1: ether 90:e2:ba:4a:7f:b1
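To confirm the cabling, the MAC addresses above can be checked against the local interfaces on each machine; for example, on the DUT:

    # verify the DUT NICs carry the expected MAC addresses
    ip link show eno1     # expect 90:e2:ba:4a:7f:b0
    ip link show ens2f1   # expect 90:e2:ba:4a:7f:b1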

Usage

Ixia on Traffic Gen Windows Client

It is essential that the IxNetwork TCL server is up and running (on port 9111) on the Traffic Gen Windows Client machine 10.4.2.0; otherwise vswitchperf won't be able to initiate traffic generation and tests will fail. Before testing, the following steps must be performed:

  1. log in to the machine with remote desktop, e.g.
    rdesktop -u administrator -g 1024x1024 10.4.2.0
  2. check whether IxNetwork is running - if so, there will be an Nw icon in the system tray
  3. if IxNetwork is running, proceed to the DUT section below; otherwise start IxNetwork via the Nw icon on the desktop or launch bar
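The TCL server can also be checked remotely from the DUT before a test run; a quick sketch, assuming netcat is available there (use the port from your configuration, see TRAFFICGEN_IXNET_PORT below):

    # from the DUT: verify the IxNetwork TCL server port is reachable
    nc -zv 10.4.2.0 9111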

IxNetwork has multiple functions. The TCL server mentioned above is used as the interface between the DUT and the IXIA generator. IxNetwork also provides a GUI, which shows the progress of the executed test, the number of involved IXIA ports, etc. There is also a data-miner application for inspecting test results. The results are stored in the folder /temp/ixia/rfctests, which is shared with the DUT over the Samba protocol so that vswitchperf can access and process them.
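If the share is not already mounted on the DUT, it can be mounted manually; a sketch assuming the folder is exported via a share named temp (the exact share name and credentials may differ):

    # on the DUT: mount the IXIA results share where vsperf expects it
    sudo mkdir -p /mnt/ixia
    sudo mount -t cifs //10.4.2.0/temp/ixia /mnt/ixia -o user=administrator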

DUT

The DUT is a Linux server where vswitchperf, the vSwitch and the VNF are installed. Testing is driven by the vsperf script, which configures the chosen packet generator, vSwitch and VNF implementations, performs a test and finally collects the test results.

vswitchperf usage

Once the IxNetwork TCL server is running, vswitchperf can be used together with the IXIA traffic generator. Follow these steps to run the vsperf test script:

  1. log in to the Linux testbed, e.g.
    ssh user@10.4.2.1
  2. if you see a message that the machine is booked, contact the current user and check when the machine will be free; example of a "booked" message:
      !!! Machine is currently booked by: !!!
      john.smith@opnfv.org

    Note: even if there is no booked-by message, it is still advisable to run the who and ps ax commands to make sure you do not start a parallel vsperf invocation

  3. make a note that the machine is booked, i.e. store your email address in the ~/booked file, e.g.
    echo "john.smith@opnfv.org" > ~/booked
  4. enable python33 environment
      scl enable python33 bash
  5. enable vsperf virtual environment
      source ~/vsperfenv/bin/activate
  6. enter the vswitchperf directory and check that vsperf works
      cd ~/vswitchperf/
      ./vsperf --help
  7. if vsperf works (the usage text is shown), you can start testing; e.g. the following command runs all available vsperf tests:
      ./vsperf
  8. at the end of your session, remove your booking note:
      rm ~/booked
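Put together, a typical DUT session condenses the steps above into the following sequence:

    ssh user@10.4.2.1                        # 1. log in to the testbed
    echo "john.smith@opnfv.org" > ~/booked   # 3. book the machine
    scl enable python33 bash                 # 4. python33 environment
    source ~/vsperfenv/bin/activate          # 5. vsperf virtual environment
    cd ~/vswitchperf/
    ./vsperf --help                          # 6. sanity check
    ./vsperf                                 # 7. run all available tests
    rm ~/booked                              # 8. release the booking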

vsperf manual termination

The procedure for manually terminating vsperf is as follows:

  1. interrupt the vsperf script from the console by pressing Ctrl+C
  2. terminate all involved processes, e.g. in the case of a test with Open vSwitch and QEMU you can do it with
      sudo pkill python; sudo pkill ovs-vswitchd; sudo pkill -f qemu; sudo pkill ovsdb-server

    Hint: There is a ~/bin/vskill script, which runs the kill commands listed above, so you can simply run:

      vskill
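The script itself just wraps the pkill commands above; a minimal sketch of what ~/bin/vskill amounts to (the actual script on the DUT may differ):

    #!/bin/sh
    # stop anything an OVS/QEMU vsperf run may have left behind
    sudo pkill python
    sudo pkill ovs-vswitchd
    sudo pkill -f qemu
    sudo pkill ovsdb-server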

Running tests

Start by carrying out the steps specified in "vswitchperf usage".

Phy2Phy

Customize the vsperf configuration to match your system:

vim ./conf/10_custom.conf

Example of a working POD3 configuration:

    # Copyright 2015 Intel Corporation.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #   http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    # paths to DPDK and OVS sources and the DPDK build target
    RTE_SDK = '/home/user/vswitchperf/src/dpdk/dpdk'    # full path to DPDK src dir
    OVS_DIR = '/home/user/vswitchperf/src/ovs/ovs'    # full path to Open vSwitch src dir
    RTE_TARGET = 'x86_64-ivshmem-linuxapp-gcc' # the relevant DPDK build target
    
    # traffic generator to use in tests
    #TRAFFICGEN = 'Dummy'
    TRAFFICGEN = 'IxNet'
    #TRAFFICGEN = 'Ixia'
    
    # Ixia/IxNet configuration
    TRAFFICGEN_IXIA_CARD = '1'
    TRAFFICGEN_IXIA_PORT1 = '2'
    TRAFFICGEN_IXIA_PORT2 = '1'
    TRAFFICGEN_IXIA_LIB_PATH = '/opt/ixos/lib/ixTcl1.0'
    TRAFFICGEN_IXNET_LIB_PATH = '/opt/ixia/lib/IxTclNetwork'
    
    # Ixia traffic generator
    TRAFFICGEN_IXIA_HOST = '10.4.1.3'      # quad dotted ip address
    
    # host where IxNetwork GUI/daemon runs
    TRAFFICGEN_IXNET_MACHINE = '10.4.2.0'  # quad dotted ip address
    TRAFFICGEN_IXNET_PORT = '8009'
    TRAFFICGEN_IXNET_USER = 'administrator'
    
    WHITELIST_NICS = ['04:00.0', '04:00.1' ]
    
    # paths to shared directory for IXIA_HOST and DUT (localhost)
    TRAFFICGEN_IXNET_TESTER_RESULT_DIR = 'c:/temp/ixia'
    TRAFFICGEN_IXNET_DUT_RESULT_DIR = '/mnt/ixia'
    
    VSWITCH_VANILLA_PHY_PORT_NAMES = ['enp5s0f0', 'enp5s0f1']
    VANILLA_NIC1_NAME = ['eth1', 'eth3']
    VANILLA_NIC2_NAME = ['eth2', 'eth4']
    
    TEST_PARAMS = {'packet_sizes':'64'}
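Before running, it is worth confirming that the PCI devices listed in WHITELIST_NICS actually exist on the DUT:

    # verify the whitelisted NICs are present
    lspci -s 04:00.0
    lspci -s 04:00.1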

Run the test:

./vsperf -t phy2phy_tput

PVP

In addition to the Phy2Phy configuration above, customize the vsperf configuration to match your system:

vim ./conf/04_vnf.conf

Example of a working POD3 configuration:

    # Copyright 2015 Intel Corporation.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #   http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    # ############################
    # VNF configuration
    # ############################
    VNF_DIR = 'vnfs/'
    VNF = 'QemuDpdkVhost'
    
    # ############################
    # Guest configuration
    # ############################
    
    # directory which is shared to QEMU guests. Useful for exchanging files
    # between host and guest
    GUEST_SHARE_DIR = '/tmp/qemu_share'
    
    # location of guest disk image
    GUEST_IMAGE = '/home/user/ovdk_guest_release.qcow2'
    
    # username for guest image
    GUEST_USERNAME = 'root'
    
    # password for guest image
    GUEST_PASSWORD = 'root'
    
    # login username prompt for guest image
    GUEST_PROMPT_LOGIN = 'ovdk_guest login:'
    
    # login password prompt for guest image
    GUEST_PROMPT_PASSWORD = 'Password:'
    
    # standard prompt for guest image
    GUEST_PROMPT = 'root@ovdk_guest .*]#'
    
    # log file for qemu
    LOG_FILE_QEMU = 'qemu.log'
    
    # log file for all commands executed on guest(s)
    # multiple guests will result in log files with the guest number appended
    LOG_FILE_GUEST_CMDS = 'guest-cmds.log'
    
    # ############################
    # Executables
    # ############################
    QEMU_BIN = '/home/user/vswitchperf/src/qemu/qemu/x86_64-softmmu/qemu-system-x86_64'
    
    OVS_VAR_DIR = '/usr/local/var/run/openvswitch/'
    
    GUEST_NET1_MAC = '00:00:00:00:00:01'
    GUEST_NET2_MAC = '00:00:00:00:00:02'
    
    GUEST_NET1_PCI_ADDRESS = '00:04.0'
    GUEST_NET2_PCI_ADDRESS = '00:05.0'
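Before launching the PVP test, a quick sanity check that the guest prerequisites from this configuration are in place (paths as configured above):

    # verify the QEMU binary and guest image exist and create the share directory
    test -x /home/user/vswitchperf/src/qemu/qemu/x86_64-softmmu/qemu-system-x86_64 \
        && echo "QEMU binary OK"
    test -f /home/user/ovdk_guest_release.qcow2 && echo "guest image OK"
    mkdir -p /tmp/qemu_share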

Run the test:

./vsperf -t pvp_tput