Testbed POD3 is available to the OPNFV community for testing and developing vswitchperf. In general, the testing environment consists of an IXIA traffic generator and a Linux machine dedicated to the vswitchperf test suite and its dependencies. The testbed can be accessed over an OpenVPN connection after the required certificates are issued and delivered to the OPNFV member. There are multiple machines in the POD3 setup, but for vswitchperf testing only the following two are important:
^ Server role ^ IP ^ Server OS ^ Access credentials ^
| vsperf sandbox | 10.4.1.2 | Linux CentOS 7 | user/user (root/P@ssw0rd) |
| IXIA/IxNetwork client | 10.4.2.0 | Windows 8 (VM) | administrator/P@ssw0rd |
Please check the intel_hosting page for details about the OpenVPN setup and the POD3 testbed.
It is essential that the IxNetwork TCL server is up and running (on port 9111) on the IXIA client machine 10.4.2.0. Otherwise vswitchperf won't be able to initiate traffic generation and tests will fail. So before testing itself, the following steps must be performed:
# login to the machine with remote desktop, e.g.
rdesktop -u administrator -g 1024x1024 10.4.2.0
# check if IxNetwork is running - if so, there will be an icon in the system tray « picture with highlighted NW icon in system tray »
# if IxNetwork is running, proceed to the vswitchperf section below; otherwise start IxNetwork via the desktop or launch bar icon « picture with win desktop with NW shortcut highlighted »
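The check above can also be sanity-checked from the Linux testbed itself. The following sketch probes whether anything accepts connections on the TCL server port, using bash's built-in ''/dev/tcp'' redirection; the host and port come from the text above, while the helper name ''tcl_server_up'' is our own:

```shell
#!/bin/bash
# Probe a TCP port using bash's /dev/tcp redirection.
# Returns 0 when something accepts the connection, non-zero otherwise.
tcl_server_up() {  # usage: tcl_server_up <host> <port>
    timeout 3 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}

# Host/port of the IxNetwork TCL server from the text above.
if tcl_server_up 10.4.2.0 9111; then
    echo "IxNetwork TCL server is reachable"
else
    echo "IxNetwork TCL server is NOT reachable - start IxNetwork first" >&2
fi
```

If the probe fails, log in to the Windows client with rdesktop as described above and start IxNetwork before running any tests.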
IxNetwork has multiple functions. The previously mentioned TCL server acts as the interface between the Linux testbed and the IXIA generator. However, IxNetwork also provides a GUI, which shows test progress, the number of involved IXIA ports, and a data miner application for inspection of test results. These results are stored in the folder /temp/ixia/rfctests, which is shared with the Linux testbed, so vswitchperf can access and process them.
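Since the results folder is shared with the Linux testbed, the newest result files can be listed directly from there. A small sketch (''latest_results'' is our own helper name; ''ls -t'' sorts by modification time, newest first):

```shell
#!/bin/bash
# List the most recently modified files in a directory, newest first.
latest_results() {  # usage: latest_results <dir> [count]
    ls -1t "$1" | head -n "${2:-5}"
}

# The IXIA result folder from the text (only present on the real testbed):
# latest_results /temp/ixia/rfctests
```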
Once the IxNetwork TCL server is running, vswitchperf can be used together with the IXIA traffic generator. The following steps should be followed to run the vsperf test script:
# login to the Linux testbed, e.g.
# in case you see a message that the machine is booked, please contact the current user and check when the machine will be free; example of a "booked" message:
!!! Machine is currently booked by: !!!
email@example.com
Note: even if there isn't any "booked by" message, it is still advised to check the 'who' and 'ps ax' commands to avoid parallel vsperf invocation
# make a note that the machine is booked, i.e. store your email in the ~/booked file, e.g.
echo "firstname.lastname@example.org" > ~/booked
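The booking convention above can be wrapped in two tiny helper functions. This is a sketch only; ''book'' and ''unbook'' are our own names, not part of vsperf or the testbed setup:

```shell
#!/bin/bash
# book: refuse to overwrite an existing booking; unbook: clear it.
book() {  # usage: book <your-email>
    if [ -s ~/booked ]; then
        echo "Machine is currently booked by: $(cat ~/booked)" >&2
        return 1
    fi
    echo "$1" > ~/booked
}

unbook() {
    rm -f ~/booked
}
```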
# enable the python33 software collection environment
scl enable python33 bash
# enable the vsperf virtual environment
source ~/vsperfenv/bin/activate
# enter the vswitchperf directory and check that vsperf works
cd ~/vswitchperf/
./vsperf --help
# if vsperf works (usage has been shown), you can start your testing, e.g. the following command will run all available vsperf tests:
# at the end of your session remove your booking note:
rm ~/booked
The procedure for manual termination of vsperf is as follows:
# interrupt the vsperf script from the console by pressing Ctrl+C
# terminate all involved processes, e.g. in case of a test with Open vSwitch and QEMU you can do it by
sudo pkill python; sudo pkill ovs-vswitchd; sudo pkill -f qemu; sudo pkill ovsdb-server
Hint: There is a ~/bin/vskill script which runs the kill commands listed above, so you can simply run:
vskill
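A sketch of what such a cleanup script might look like - a reconstruction from the pkill commands above, not the actual contents of ~/bin/vskill:

```shell
#!/bin/bash
# Processes left behind by an interrupted vsperf run (names from the text).
VSKILL_CMDS=(
    "pkill python"
    "pkill ovs-vswitchd"
    "pkill -f qemu"
    "pkill ovsdb-server"
)

vskill() {
    local cmd
    for cmd in "${VSKILL_CMDS[@]}"; do
        # pkill returns non-zero when no matching process exists; ignore that.
        sudo $cmd || true
    done
}
```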
In case re-installation of the Linux testbed is required, the following steps should be followed.
# Perform a minimal installation of CentOS 7 and create an account for user. This can be achieved either locally by Intel staff or remotely. At the time of this writing, there were two options of remote access to POD3 servers. Please contact the Intel staff responsible for POD testbeds for more details.
# in case DHCP doesn't assign the correct IP address (i.e. 10.4.1.2), manual configuration will be needed. Networking details are available at intel_hosting. The network configuration can be set manually in the /etc/sysconfig/network-scripts/ifcfg* file related to the network card where the backbone is connected.
# login as root and update the system
ssh email@example.com
yum update
# install the software collection with python version 3.3
mkdir install
cd install
wget https://www.softwarecollections.org/en/scls/rhscl/python33/epel-7-x86_64/download/rhscl-python33-epel-7-x86_64.noarch.rpm
rpm -i rhscl-python33-epel-7-x86_64.noarch.rpm
# install additional packages needed by vswitchperf, including python 3.3 and its packages
yum install vim wget git scl-utils fuse-libs fuse fuse-devel pciutils python33 python33-python-tkinter glibc.i686 kernel-devel make autoconf libtool automake gcc gcc-c++
# configure hugepages in both grub and system configuration:
perl -i -pe 's/^(GRUB_CMDLINE_LINUX=.*)"$/$1 default_hugepagesz=1G hugepagesz=1G hugepages=16 isolcpus=1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31 selinux=0"/' /etc/default/grub
echo 'vm.nr_hugepages=1024' >> /etc/sysctl.conf
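After the reboot required later in this procedure, the hugepage reservation can be verified by parsing /proc/meminfo (a standard Linux interface; the field names are stable, the helper names are our own):

```shell
#!/bin/bash
# Read hugepage counters from /proc/meminfo.
hugepages_total() {
    awk '/^HugePages_Total:/ {print $2}' /proc/meminfo
}

hugepagesize_kb() {
    awk '/^Hugepagesize:/ {print $2}' /proc/meminfo
}

echo "Reserved hugepages: $(hugepages_total) x $(hugepagesize_kb) kB"
```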
# enable the user account with sudo privileges
## add the user into the wheel group
gpasswd -a user wheel
## use visudo to edit /etc/sudoers and enable passwordless sudo access for the wheel group; the particular line should look like:
%wheel ALL=(ALL) NOPASSWD: ALL
# reboot the machine; this is required by both the yum update and the hugepages setup
# login as user and clone the vswitchperf repository
ssh firstname.lastname@example.org
git clone https://gerrit.opnfv.org/gerrit/vswitchperf
cd ~/vswitchperf/
git pull
# build vsperf dependencies
cd ~/vswitchperf/src
make
# check and fix possible issues with DPDK kernel modules
## run
## select the option "Insert IGB UIO module"
## in case the module can't be loaded, select the option "x86_64-ivshmem-linuxapp-gcc" to rebuild it, and after that try again
# create a virtual environment for vswitchperf
cd
scl enable python33 bash
virtualenv vsperfenv
cd vsperfenv
source bin/activate
pip install -r ~/vswitchperf/requirements.txt
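A quick way to confirm the virtual environment is active before invoking vsperf - the activate script exports VIRTUAL_ENV; ''venv_active'' is our own helper name:

```shell
#!/bin/bash
# Returns 0 when a virtualenv is active in the current shell.
venv_active() {
    [ -n "${VIRTUAL_ENV:-}" ]
}

if venv_active; then
    echo "virtualenv active: $VIRTUAL_ENV"
else
    echo "no virtualenv active - run 'source ~/vsperfenv/bin/activate'" >&2
fi
```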
# install IXIA clients as described in ~/vswitchperf/docs/quickstart.md
# customize the vsperf configuration to match your system (a sample of a working configuration is attached)
# enable a simple booking message by adding the following section at the end of /home/user/.bash_profile
if [ -f ~/booked ] ; then
    echo
    echo -e "\e[00;31m!!! Machine is currently booked by: !!!\e[00m"
    cat ~/booked
    echo
fi
NOTES:
  * OVS master from 2015-07-02 doesn't accept openflows from vsperf; as a workaround, it is possible to downgrade OVS to an older version - version v2.3.90 from 2015-06-06 is known to work.