The Pharos spec defines the OPNFV test environment in which the OPNFV platform can be deployed and tested.
Virtualized environments are useful but do not provide a fully featured deployment/test capability.
The Rls 1 specification is modeled on the Bootstrap/Get-started (BGS) project requirements:
* First draft of the environment for BGS: https://wiki.opnfv.org/get_started/get_started_work_environment
* Fuel environment: https://wiki.opnfv.org/get_started/networkingblueprint
* Foreman environment: https://wiki.opnfv.org/get_started_experiment1#topology
CPU:
* Intel Xeon E5-2600 (Ivy Bridge at least, or similar)
Local Storage:
* Disks: 4 x 500 GB-2 TB + 1 x 300 GB SSD (leave some room for experiments)
* The first 2 disks should be combined to form a 1 TB virtual store for the OS/software etc. (see the sketch after this list)
* The remaining disks should be combined to form a virtual disk for Ceph storage
* The 5th disk (SSD) is used for the distributed storage (Ceph) journal, to benefit from SSD technology
* Performance testing requires a mix of compute nodes with Ceph (Swift + Cinder) and compute nodes without Ceph storage
* Virtual ISO boot capability, or a separate PXE boot server (DHCP/TFTP or Cobbler)
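As a rough, hypothetical illustration of the disk layout above (the device names are placeholders that will differ per server, and LVM concatenation is only one way to combine disks), the two virtual stores could be assembled along these lines:

```python
import subprocess

# Hypothetical device names -- adjust to the actual server inventory.
OS_DISKS = ["/dev/sdb", "/dev/sdc"]    # 2 x 500 GB -> ~1 TB OS/software store
CEPH_DISKS = ["/dev/sdd", "/dev/sde"]  # remaining spindles -> Ceph data
# The 300 GB SSD (e.g. /dev/sdf) is left raw for the Ceph journal.

def run(cmd):
    """Echo and execute a command, aborting on failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Concatenate the first two disks into one ~1 TB OS/software volume.
run(["pvcreate"] + OS_DISKS)
run(["vgcreate", "os_vg"] + OS_DISKS)
run(["lvcreate", "-l", "100%FREE", "-n", "os_lv", "os_vg"])

# Combine the remaining disks into a single volume for Ceph storage.
run(["pvcreate"] + CEPH_DISKS)
run(["vgcreate", "ceph_vg"] + CEPH_DISKS)
run(["lvcreate", "-l", "100%FREE", "-n", "ceph_lv", "ceph_vg"])
```

Plain LVM concatenation yields the full ~1 TB but no redundancy; RAID 1 would halve capacity in exchange for surviving a disk failure.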
Memory:
* 32 GB RAM minimum
Power Supply:
* A single power supply is acceptable (redundant power is not required, but nice to have)
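A minimal sketch, assuming a Linux node with /proc and /sys mounted, for sanity-checking a candidate server against the minimums above (32 GB RAM, 4 spindles + 1 SSD). The script is illustrative only and not part of any OPNFV tooling:

```python
import glob
import re

MIN_RAM_KB = 32 * 1024 * 1024  # 32 GB RAM minimum (see above)
MIN_DISKS = 5                  # 4 x spindle + 1 x SSD

# Total memory as reported by the kernel.
with open("/proc/meminfo") as f:
    mem_kb = int(re.search(r"MemTotal:\s+(\d+)", f.read()).group(1))

# Whole SCSI/SATA disks (sda, sdb, ...); partitions are not listed here.
disks = glob.glob("/sys/block/sd?")

print("RAM:   %d kB -> %s" % (mem_kb, "OK" if mem_kb >= MIN_RAM_KB else "BELOW MINIMUM"))
print("Disks: %d     -> %s" % (len(disks), "OK" if len(disks) >= MIN_DISKS else "BELOW MINIMUM"))
```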
Pre-provisioning the Jump Server:
* OS - CentOS 7
* KVM/QEMU
* Installer (Foreman, Fuel, ...) in a VM (see the libvirt sketch after this list)
* Collaboration tools
* VNC
* ?
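As an illustrative sketch of "installer in a VM" (the domain XML, ISO path, bridge name br-admin, and VM sizing are hypothetical placeholders, not part of this spec), the jump server's libvirt Python bindings could boot the installer VM along these lines:

```python
import libvirt

# Placeholder definition of an installer VM; adjust paths, sizing and
# the bridge name to the actual jump server setup.
DOMAIN_XML = """
<domain type='kvm'>
  <name>installer</name>
  <memory unit='GiB'>8</memory>
  <vcpu>4</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <boot dev='cdrom'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='/var/lib/libvirt/images/installer.iso'/>
      <target dev='hdc' bus='ide'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br-admin'/>
    </interface>
    <graphics type='vnc' port='-1'/>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")  # local KVM/QEMU hypervisor
dom = conn.defineXML(DOMAIN_XML)       # register the VM with libvirt
dom.create()                           # boot it
```

The VNC graphics device lines up with the collaboration-tool item above: remote users can watch the installer console.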
Test Tools:
* Tests are invoked from the jump server
* Jenkins is responsible for pre- and post-actions
* The objective is for test suites to be independent of the installer; however, test cases could describe different network configurations. For Rls 1 the aim is a single network configuration (based on BGS)
* Test tools are specified by Functest
* Rally/Tempest and Robot scenario tests are automatically triggered by CI/Jenkins (see the sketch after this list)
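A minimal sketch of the CI trigger described above, using the python-jenkins client; the server URL, credentials, job name, and parameters are hypothetical placeholders:

```python
import jenkins  # pip install python-jenkins

# Placeholder URL and credentials.
server = jenkins.Jenkins("https://jenkins.example.org",
                         username="opnfv", password="api-token")

# Kick off the Functest suite (Rally/Tempest and Robot scenarios)
# against a deployed pod; job and parameter names are illustrative.
server.build_job("functest-daily",
                 parameters={"INSTALLER_TYPE": "fuel", "POD_NAME": "pod-1"})
```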
Controller nodes - bare metal
Compute nodes - bare metal
Firewall rules:
Lights-out Management:
Test-bed network:
* 24 or 48 port TOR switch
* NICs - 1GE, 10GE - per server; can be on-board or PCI-e
* Connectivity for each data/control network is through a separate NIC. This simplifies switch management but requires more NICs on the server and more switch ports
* The lights-out network can be shared with the Admin/Management network
Network Interfaces:
* Option I: 4x1G Control, 2x40G Data, 48 Port Switch
  * 1 x 1G for IPMI (lights-out management)
  * 1 x 1G for Admin/PXE boot
  * 1 x 1G for control plane connectivity
  * 1 x 1G for storage
  * 2 x 40G (or 10G) for the data network (redundancy, NIC bonding, high-bandwidth testing; see the NIC speed check after this list)
* Option II: 1x1G Control, 2x40G (or 10G) Data, 24 Port Switch
  * Connectivity to networks is through VLANs on the control NIC; the data NIC carries VNF traffic and storage traffic, segmented through VLANs
* Option III: 2x1G Control, 2x10G Data, 2x40G Storage, 24 Port Switch
  * The data NIC is used for VNF traffic; the storage NIC is used for the control plane and storage, segmented through VLANs (separating host traffic from VNF traffic)
  * 1 x 1G for IPMI
  * 1 x 1G for Admin/PXE boot
  * 2 x 10G for control plane connectivity/storage
  * 2 x 40G (or 10G) for the data network
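A small, hypothetical helper for checking that a server actually presents the 1G/10G/40G links the chosen option calls for, assuming a Linux host where /sys/class/net is available:

```python
import glob
import os

# Report the negotiated speed of every NIC; interfaces that are down
# (or virtual) do not expose a speed and are flagged instead.
for path in sorted(glob.glob("/sys/class/net/*/speed")):
    nic = path.split(os.sep)[-2]
    try:
        with open(path) as f:
            mbit = int(f.read().strip())
        print("%-10s %6d Mb/s" % (nic, mbit))
    except (OSError, ValueError):
        print("%-10s link down or no fixed speed" % nic)
```

Run on each node, this makes it easy to confirm, for example, the 2 x 40G data ports of Option I before cabling is signed off.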
Files for documenting the lab network layout were contributed in Visio VSDX format, compressed as a ZIP file. Here is a sample of what the Visio diagram looks like.
Download the Visio ZIP file here: opnfv-example-lab-diagram.vsdx.zip
FYI: Here is what the OpenDaylight lab wiki pages look like.