====== Spirent Virtual Cloud Test Lab ======
  
A community-provided bare-metal resource hosted at Nephoscale, leveraged for public SDN/NFV testing and for the OpenDaylight, OpenStack, and OPNFV projects.
  
  * **Lab name:** Spirent VCT Lab
  * **Lab location:** USA West and East availability zones
  * **Who to contact:** iben.rodriguez@vmsec.com (Iben Rodriguez) - Use Slack for discussion: https://opnfv.slack.com/
  * **Any main focus areas:** Network Testing and Measurement - IPv6, Storage, OpenDaylight, OpenContrail, ONOS, MaaS, SaltStack, SDN and NFV Technology
  * **Any other details that should be shared to help increase involvement:** This lab is used to develop the low-level test scripts used to identify functional and performance limits of network devices.
  * Support of the VCT Lab is provided by https://cloudbase.it/
  
We have various hardware and software in the lab linked together with 40 Gigabit Ethernet. The servers are mostly whitebox x86, with ARM CPUs in the pipeline.
  
IPv6 and IPv4 are natively supported by our service provider: [[http://www.nephoscale.com|Nephoscale]]
We have a /48 and a /64 IPv6 allocation, as well as multiple /24 and smaller publicly routable IPv4 subnets.
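As a rough sense of scale, Python's standard ''ipaddress'' module can show what allocations of those prefix lengths hold. The prefix sizes (/48, /64, /24) come from the text above; the example networks below are documentation placeholders, not the lab's actual ranges.

```python
import ipaddress

# Placeholder prefixes from the IETF documentation ranges; the lab's
# real /48 and /24 allocations are different.
v6_block = ipaddress.ip_network("2001:db8::/48")
v4_block = ipaddress.ip_network("203.0.113.0/24")

# A /48 subdivides into 2 ** (64 - 48) = 65,536 /64 networks,
# i.e. 65,536 standard-sized IPv6 subnets.
v6_subnets = sum(1 for _ in v6_block.subnets(new_prefix=64))
print(v6_subnets)  # 65536

# A /24 holds 256 addresses (254 usable hosts after the network
# and broadcast addresses are excluded).
print(v4_block.num_addresses)  # 256
```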
As much as possible, all activities should be scripted via **Jenkins** jobs using: https://jenkins.vctlab.com:9443/
New nodes are created and managed with **Mist.IO** to support the following functions:
  * SSH Key Management
  * New system setup and provisioning - ability to deploy 10s and 100s of machines at a time, manually or via a script
  * Role-Based Access
  * Monitoring and Alerting for Windows, VMware, Linux, KVM, OpenStack and other platforms
  * Run scripts and access a shell console for remote administration
  * IP Address Management
  * Web UI, Command Line (CLI), and REST API available
 + 
**Spirent VCT Lab** currently operates several virtualization environments - three **OpenStack** releases plus **VMware** and **Hyper-V** - each deployed on a different **physical** hardware configuration:
  * **OpenStack Juno** (Ubuntu 14.04.1 LTS, 10 cores, 64 GB RAM, 1 TB SATA, 40 Gbps) - snsj53 <-- used for 40 Gbps SR-IOV performance testing
  * **OpenStack Kilo** (CentOS 7.1, 10 cores, 64 GB RAM, 1 TB SATA, 40 Gbps) - snsj54 <-- shared between OPNFV and 40 Gbps SR-IOV performance testing
  * **VMware ESXi 6.0** (CentOS 6.5, 10 cores, 64 GB RAM, 1 TB SATA, 10 Gbps) - snsj67
  * **OpenStack Icehouse – 2014.1.3 release** (CentOS 6.5, 20 cores, 64 GB RAM, 1 TB SATA, 10 Gbps) - snsj69
  * **Microsoft Windows Server 2012 R2 Hyper-V** (10 cores, 64 GB RAM, 1 TB SATA, 10 Gbps) - snsj76
  * **VMware vSphere 6 VSAN** cluster with lots of RAM, CPU, and disk, plus 40 Gbps networking with VLANs for virtual machines
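The node inventory above can be tallied programmatically. A minimal sketch, with hostnames and specs transcribed from the list (the vSphere VSAN cluster is omitted because its exact sizing is not given):

```python
# Aggregate capacity of the listed Spirent VCT Lab nodes.
# Specs transcribed from the environment list; the vSphere VSAN
# cluster is excluded since its sizing is not specified.
nodes = {
    "snsj53": {"platform": "OpenStack Juno",     "cores": 10, "ram_gb": 64, "nic_gbps": 40},
    "snsj54": {"platform": "OpenStack Kilo",     "cores": 10, "ram_gb": 64, "nic_gbps": 40},
    "snsj67": {"platform": "VMware ESXi 6.0",    "cores": 10, "ram_gb": 64, "nic_gbps": 10},
    "snsj69": {"platform": "OpenStack Icehouse", "cores": 20, "ram_gb": 64, "nic_gbps": 10},
    "snsj76": {"platform": "Hyper-V 2012 R2",    "cores": 10, "ram_gb": 64, "nic_gbps": 10},
}

total_cores = sum(n["cores"] for n in nodes.values())
total_ram_gb = sum(n["ram_gb"] for n in nodes.values())
print(f"{len(nodes)} nodes, {total_cores} cores, {total_ram_gb} GB RAM")
# 5 nodes, 60 cores, 320 GB RAM
```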
  
  
----
Here are the OpenStack installers we support:
  * https://github.com/cloudbase/openstack-puppet-samples
  * https://github.com/cloudbase/salt-openstack
  * RDO
  * Canonical MaaS - https://wiki.opnfv.org/pharos/maas_getting_started_guide
  
There are a number of different networks referenced in the VPTC Design Blueprint.
  * Together these offer a flexible solution to allow up to 8 simultaneous tests to take place with physical traffic generators at the same time.
  
Assuming a 10-to-1 over-subscription ratio, we could handle 80 customers with the current environment.
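The capacity estimate is straightforward arithmetic; a quick sketch using the figures stated above (8 simultaneous test slots, 10:1 over-subscription):

```python
# Capacity estimate: 8 simultaneous physical test slots, each shared
# by up to 10 customers under a 10-to-1 over-subscription ratio.
simultaneous_tests = 8
oversubscription_ratio = 10

supported_customers = simultaneous_tests * oversubscription_ratio
print(supported_customers)  # 80
```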
  
For example: