Work environment for Bootstrap/Get-started
Note: The text below is a first draft of a work environment for Bootstrap/Get-started (BGS).
BGS is to deploy both to bare metal and to a virtual environment. This requires a physical server environment for BGS.
Possible hardware scenarios for OPNFV
 
Development Environment Layout
The assumption is that a total of 5 PODs is required.
BGS targets a deployment with HA/cluster support. As a result, the following POD configuration is assumed:
 3 x Control node (for HA/clustered setup of OpenStack and OpenDaylight)
 
 2 x Compute node (to bring up/run VNFs)
 
 1 x Jump Server/Landing Server, in which the installer (Fuel) runs in a VM
 
Total: 30 servers (5 PODs with 6 servers each) are required.
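As a quick cross-check of the numbers above, a minimal sketch in Python (the role names are illustrative):

    # Assumed BGS POD layout; role names are illustrative.
    POD_ROLES = {
        "control": 3,  # HA/clustered OpenStack + OpenDaylight
        "compute": 2,  # bring up/run VNFs
        "jump": 1,     # installer (Fuel) runs in a VM here
    }
    PODS = 5

    servers_per_pod = sum(POD_ROLES.values())   # 6
    total_servers = PODS * servers_per_pod      # 30
    print(f"{PODS} PODs x {servers_per_pod} servers = {total_servers} servers")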
 
Server configuration
Typical server configuration (for simplicity, the same server type is assumed for all components of the POD)
Server:
 CPU: Intel Xeon E5-2600 (at least Ivy Bridge, or similar)
 
 Disk: 4 x 500 GB-2 TB HDD + 1 x 300 GB SSD (leave some room for experiments); a sizing sketch follows this list
 The first 2 disks should be combined to form a 1 TB virtual store for the OS/software etc.
 
 The remaining disks should be combined to form a virtual disk for Ceph storage.
 
 The 5th disk (SSD) holds the journal for distributed (Ceph) storage, placing journal writes on SSD technology.
 
 Performance testing requires a mix of compute nodes with Ceph storage (Swift + Cinder) and compute nodes without Ceph storage
 
 
 Virtual ISO boot capability or a separate PXE boot server (DHCP/TFTP or Cobbler); a minimal PXE sketch follows this list
 
 Access to console ports/lights-out management through a management tool and/or a serial console server
 
 Lights-out-management/Out-of-band management for power on/off/reset
 
 Memory: >= 32 GB RAM (minimum)
 
 Single active power supply, with spares in the lab to address power supply failures.
 
 I/O
 Option I: 4 x 1G Control, 2 x 40G Data, 48-port switch
 Connectivity to each network is through a separate NIC, which simplifies switch management. However, this requires more NICs on the server and more switch ports. (Per-option NIC counts are summarized in a sketch after this list.)
 1 x 1G for ILMI (lights-out management)
 
 1 x 1G for Admin/PXE boot
 
 1 x 1G for control plane connectivity
 
 1 x 1G for storage
 
 "NICs" can be internal in case a blade server is used
 
 
 2 x 40G (or 10G) for the data network (redundancy, NIC bonding, high-bandwidth testing)
 
 
 Option II: 1 x 1G Control, 2 x 40G (or 10G) Data, 24-port switch
 
 Option III: 2 x 1G Control, 2 x 10G Data, 2 x 40G Storage, 24-port switch
 
 
 Power: A single power supply is acceptable (redundant power is not required, but nice to have)
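For the disk layout described in the list above, a small sizing sketch; the individual drive sizes are assumptions within the stated 500 GB-2 TB range:

    # Illustrative per-server disk plan; individual HDD sizes are assumptions
    # within the stated 500 GB-2 TB range.
    HDDS_GB = [500, 500, 1000, 1000]  # 4 spinning disks
    SSD_GB = 300                      # 1 x 300 GB SSD

    os_store_gb = sum(HDDS_GB[:2])    # first two disks -> ~1 TB OS/software store
    ceph_data_gb = sum(HDDS_GB[2:])   # remaining disks -> Ceph data volume
    ceph_journal_gb = SSD_GB          # SSD holds the Ceph journal

    assert os_store_gb >= 1000, "OS virtual store should be about 1 TB"
    print(os_store_gb, ceph_data_gb, ceph_journal_gb)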
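For the PXE boot option above, a minimal sketch assuming dnsmasq provides both DHCP and TFTP on the admin/PXE network; the interface name, address range, and paths are illustrative, not prescribed by this draft:

    # Minimal dnsmasq-based PXE service on the Jump Server (illustrative values).
    DNSMASQ_CONF = """\
    # admin/PXE NIC on the Jump Server
    interface=eth1
    dhcp-range=192.168.122.100,192.168.122.200,12h
    dhcp-boot=pxelinux.0
    enable-tftp
    tftp-root=/var/lib/tftpboot
    """

    with open("/etc/dnsmasq.d/pxe.conf", "w") as f:
        f.write(DNSMASQ_CONF)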
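The per-server NIC counts implied by the three I/O options can be summarized as follows; port counts and switch sizes are copied from the list, and the dictionary layout is only for illustration:

    # Per-server NIC counts implied by the three I/O options above;
    # tuples are (port count, speed).
    OPTIONS = {
        "I": {"control_1g": 4, "data": (2, "40G"), "storage": None, "switch_ports": 48},
        "II": {"control_1g": 1, "data": (2, "40G"), "storage": None, "switch_ports": 24},
        "III": {"control_1g": 2, "data": (2, "10G"), "storage": (2, "40G"), "switch_ports": 24},
    }

    for name, opt in OPTIONS.items():
        nics = opt["control_1g"] + opt["data"][0] + (opt["storage"][0] if opt["storage"] else 0)
        print(f"Option {name}: {nics} NICs per server, {opt['switch_ports']}-port switch")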
 
Switch:
 ToR switch should support 1G/10G/40G links (either 4 separate switches, or VLAN isolation to support the 4 networks)
 
 Uplink from the Jump Server to the Internet must be 1G or better.
 
 Public IP address pool per POD (8 addresses)
 
 Private address pool per POD: 3 x /24 subnets (either not shared, or VLAN-isolated); see the addressing sketch below.
 
 Additional links/ports to support Ceph (Swift + Cinder volumes) on at least 3 to 5 nodes for the OPNFV controller and other PoCs.
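A minimal addressing sketch using Python's ipaddress module; the 10.20.0.0/16 lab range is an assumption, and 8 public addresses per POD corresponds to one /29 block:

    import ipaddress

    # Hypothetical lab-wide private range; each POD draws 3 x /24 from it.
    LAB_RANGE = ipaddress.ip_network("10.20.0.0/16")
    PODS, SUBNETS_PER_POD = 5, 3  # e.g. admin/PXE, control plane, storage

    slash24s = list(LAB_RANGE.subnets(new_prefix=24))
    plan = {f"pod{p + 1}": [str(slash24s[p * SUBNETS_PER_POD + i])
                            for i in range(SUBNETS_PER_POD)]
            for p in range(PODS)}

    # 8 public addresses per POD is one /29 block.
    assert ipaddress.ip_network("0.0.0.0/29").num_addresses == 8

    for pod, nets in plan.items():
        print(pod, nets)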
 
Additional requirements (if servers are offered as MaaS):
 
Example Pod Configuration