

Intel is providing a hosted test-bed with a number of isolated bare-metal environments allocated to different OPNFV projects.

Currently there are 6 "PODs" operational (a Genesis POD has 6 servers), and the short-term plan is to stand up several more PODs. The first 2 PODs are being used by the BGS project (which is driving compute, network, and storage requirements). For support reasons all PODs will have identical network configurations; however, exceptions can be made if a project has a specific need.

Current POD assignments … get_started:intel_pods

POD   | Network Support | # of Servers | Usage Type        | Project              | Current Users | Status | Contact
POD 1 | non-VLAN        | 6            | Bare Metal Deploy | Genesis              | Genesis Team  | In Use | Tim Rozet
POD 2 | non-VLAN        | 6            | Bare Metal Deploy | Functest             | Functest Team | In Use | Morgan Richomme
POD 3 | N/A             | 3            | Other             | VSPERF               | VSPERF Team   | In Use | Maryam Tahhan
POD 4 | VLAN            | 6            | CI build/deploy   | OPNFV CI             | CI Team       | In Use | Fatih Degirmenci
POD 5 | VLAN            | 5            | Bare Metal Deploy | JOID (also MAAS PoC) | JOID Team     | In Use | Narinder Gupta
POD 6 | non-VLAN        | 5            | Bare Metal Deploy | MAAS PoC             | MAAS PoC Team | In Use | Narinder Gupta

Remote access uses OpenVPN via a 100 Mbps (symmetrical) internet link … VPN Quickstart instructions are here: opnfv_intel_hf_testbed_-_quickstart_vpn_.docx
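
Once a VPN profile has been issued, connecting is a single OpenVPN client invocation. The profile filename below is a placeholder, not the actual file supplied with the quickstart document; this sketch only prints the command (dry run) rather than executing it:

```shell
# Dry-run sketch: connect to the test-bed with the OpenVPN client.
# OVPN_PROFILE is a placeholder -- use the profile supplied with the
# VPN quickstart document. The echo only prints the command; remove it
# to actually connect.
OVPN_PROFILE="opnfv-intel-testbed.ovpn"
echo "sudo openvpn --config $OVPN_PROFILE"
```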

All servers are PXE-boot enabled and can also be accessed "out-of-band" over the lights-out network using RMM/BMC. Servers are current or recent generation Xeon-DP … specs for each POD/server to be documented here.

BGS environment details are here: https://wiki.opnfv.org/get_started/get_started_work_environment. The basic compute setup is as follows:

  • 3 x Control nodes (for HA/clustered setup of OpenStack and OpenDaylight)
  • 2 x Compute nodes (to bring up/run VNFs)
  • 1 x Jump Server/Landing Server, on which the installer runs in a VM

Each POD (environment) can support up to 5 VLANs (these subnets are pre-defined in the test-bed network). The default BGS environment is configured with 4 networks: 1) Public, 2) Private, 3) Admin, 4) Lights-out. The Admin and Lights-out networks share a VLAN (hence each server only uses 3 NICs).
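
On the VLAN PODs, attaching a server NIC to one of the pre-defined VLANs is a standard Linux `ip` operation. The NIC name, VLAN ID, and address below are placeholders, not actual POD values (those are in the configuration spreadsheet); the sketch only prints the commands (dry run):

```shell
# Dry-run sketch: attach a server NIC to one of the POD's pre-defined
# VLANs. NIC, VLAN_ID, and the address are placeholders, not actual POD
# values. The echo only prints each command; remove it to apply them.
NIC="eth1"
VLAN_ID="117"
echo "ip link add link $NIC name $NIC.$VLAN_ID type vlan id $VLAN_ID"
echo "ip addr add 10.2.117.10/24 dev $NIC.$VLAN_ID"
echo "ip link set $NIC.$VLAN_ID up"
```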

See this spreadsheet for specific POD details including network addresses, MAC addresses, IPMI logins, pre-installed components, etc.: opnfv_intel_hf_testbed_-_configuration.xlsx (out of date; updated details are a work in progress under the link below)

Intel POD Specifications

Collaborating

  • Best practices for collaborating in the test environment are being developed and will be documented here
  • The landing server is provisioned with CentOS 7 or Ubuntu 14.04 LTS. Each project decides the configuration of the other servers and provisions them accordingly
  • The landing server has a VNC server for shared VNC sessions
  • While VNC allows screens to be shared, an audio session may also be useful … set up an audio bridge or try a collaboration tool such as Lync, WebEx, Google Hangouts, Blue Jeans, etc.
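
A common way to join the shared VNC session on the landing server is to tunnel the VNC port over SSH rather than exposing it on the VPN directly. The hostname, user, and display number below are placeholders; the sketch only prints the commands (dry run):

```shell
# Dry-run sketch: join a shared VNC session on the landing server via an
# SSH tunnel. JUMP_HOST, the user, and DISPLAY_NO are placeholders.
# VNC display :N listens on TCP port 5900+N by convention.
JUMP_HOST="pod1-jump.example.org"
DISPLAY_NO="1"
VNC_PORT=$((5900 + DISPLAY_NO))
echo "ssh -L $VNC_PORT:localhost:$VNC_PORT user@$JUMP_HOST"
echo "vncviewer localhost:$VNC_PORT"
```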

POD HowTo Documents

Standard Processes

The Typical POD

Figure 1: Typical Intel OPNFV POD Network Topology

TAGS: BGS, Bootstrap Getting Started, Intel, Test, Lab

get_started/intel_hosting.1451501906.txt.gz · Last modified: 2015/12/30 18:58 by Jack Morgan