Work environment for Bootstrap/Get-started

Note: The following is a first-draft cut at a work environment for Bootstrap/Get-started (BGS).

BGS must be able to deploy to both bare metal and to a virtual environment, which requires a physical server environment for BGS. See also: Possible hardware scenarios for OPNFV.

Development Environment Layout

The assumption is that a total of 5 PODs is required:

  • 1 POD for Run/Verify
  • 1 POD for Merge
  • 3 PODs for Development (parallel development by multiple teams)

BGS targets a deployment with HA/cluster support. As a result, the following POD configuration is assumed:

  • 3 x Control node (for HA/clustered setup of OpenStack and OpenDaylight)
  • 2 x Compute node (to bring up/run VNFs)
  • 1 x Jump Server/Landing Server in which the installer runs in a VM (FUEL)

Total: 30 servers (5 PODs with 6 servers each) are required.
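
As a quick sanity check on the numbers above, the following Python sketch (illustrative only; the role names mirror the lists on this page) derives the 30-server total from the POD layout:

  # Illustrative sketch: derive the total server count from the POD layout above.
  pods = {"run_verify": 1, "merge": 1, "development": 3}      # 5 PODs in total
  servers_per_pod = {"control": 3, "compute": 2, "jump": 1}   # 6 servers per POD

  total_pods = sum(pods.values())
  total_servers = total_pods * sum(servers_per_pod.values())
  print(total_pods, total_servers)  # -> 5 30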

Server configuration

Typical server configuration (for simplicity, the same server type is assumed for all components of a POD):

Server:

  • CPU: Intel Xeon E5-2600 (at least Ivy Bridge, or similar)
  • Disk: 4 x 500 GB-2 TB HDD + 1 x 300 GB SSD (leave some room for experiments)
    • The first 2 disks should be combined to form a 1 TB virtual store for the OS/software, etc.
    • The remaining disks should be combined to form a virtual disk for Ceph storage.
    • The 5th disk (SSD) is used for the distributed-storage (Ceph) journal, to benefit from SSD performance.
    • Performance testing requires a mix of compute nodes with Ceph storage (Swift + Cinder) and without Ceph storage.
  • Virtual ISO boot capabilities or a separate PXE boot server (DHCP/TFTP or Cobbler)
  • Access to console ports/lights-out management through a management tool and/or a serial console server
  • Lights-out management/out-of-band management for power on/off/reset
  • Memory: >= 32 GB RAM
  • A single active power supply is acceptable, with spares kept in the lab to address power supply failures.
  • I/O (three cabling options; a port-count sketch follows this list)
    • Option I: 4 x 1G control, 2 x 40G data, 48-port switch
      • Connectivity to each network is through a separate NIC, which simplifies switch management but requires more NICs on the server and more switch ports.
        • 1 x 1G for lights-out management (IPMI)
        • 1 x 1G for admin/PXE boot
        • 1 x 1G for control plane connectivity
        • 1 x 1G for storage
        • "NICs" can be internal in case a blade server is used
      • 2 x 40G (or 10G) for the data network (redundancy, NIC bonding, high-bandwidth testing)
    • Option II: 1 x 1G control, 2 x 40G (or 10G) data, 24-port switch
      • Connectivity to the control networks is through VLANs on the control NIC; the data NICs carry VNF traffic and storage traffic, segmented through VLANs.
      • "NICs" can be internal in case a blade server is used
    • Option III: 2 x 1G control, 2 x 10G data, 2 x 40G storage, 24-port switch
      • The data NICs are used for VNF traffic; the storage NICs carry the control plane and storage traffic, segmented through VLANs (separating host traffic from VNF traffic).
        • 1 x 1G for IPMI
        • 1 x 1G for admin/PXE boot
        • 2 x 10G for control plane connectivity/storage
        • 2 x 40G (or 10G) for the data network
        • "NICs" can be internal in case a blade server is used
  • Power: a single power supply is acceptable (redundant power is not required, but nice to have)
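
The sketch below compares the three I/O options in terms of switch ports per server and per POD. It is illustrative only: the NIC counts follow the option summaries above, and the per-POD figure ignores uplinks and any split across separate management and data switches.

  # Illustrative port-count comparison for the three I/O options above.
  # NIC counts follow the option summaries; uplinks and management/data
  # switch splits are ignored, so treat the per-POD figures as rough only.
  io_options = {
      "Option I":   {"1G": 4, "40G": 2},             # one 1G NIC per control network
      "Option II":  {"1G": 1, "40G": 2},             # control networks as VLANs on one 1G NIC
      "Option III": {"1G": 2, "10G": 2, "40G": 2},   # per the sub-bullets: control/storage as VLANs on the 10G pair
  }

  servers_per_pod = 6  # 3 control + 2 compute + 1 jump server

  for name, nics in io_options.items():
      per_server = sum(nics.values())
      print(f"{name}: {per_server} ports per server, "
            f"{per_server * servers_per_pod} ports per POD")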

Switch:

  • The ToR switch should support 1G/10G/40G links (either 4 separate links or VLAN isolation to support the 4 networks)
  • The uplink from the jump server to the Internet must be 1G or better.
  • Public IP address pool per POD: 8 addresses (an example per-POD address plan is sketched after this list)
  • Private address pool per POD: 3 x /24 subnets (either not shared or VLAN isolated)
  • Additional links/ports to support Ceph (Swift + Cinder volumes) on at least 3 to 5 nodes for the OPNFV controller and other PoCs.
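
A minimal sketch of how the per-POD address pools could be carved up, using Python's ipaddress module. The concrete prefixes are hypothetical placeholders (a documentation range for the public pool and an arbitrary 10.x range for the private pools), not assigned OPNFV ranges; each POD gets a /29 (8 public addresses) and three private /24 subnets.

  import ipaddress

  # Illustrative only: the base prefixes are hypothetical placeholders,
  # not assigned OPNFV ranges. Each POD gets 8 public addresses (a /29)
  # and three private /24 subnets (e.g. admin/PXE, control, storage).
  PUBLIC_BASE = ipaddress.ip_network("198.51.100.0/24")   # documentation range
  PRIVATE_BASE = ipaddress.ip_network("10.20.0.0/16")

  def pod_address_plan(pod_index):
      public = list(PUBLIC_BASE.subnets(new_prefix=29))[pod_index]
      privates = list(PRIVATE_BASE.subnets(new_prefix=24))[3 * pod_index:3 * pod_index + 3]
      return public, privates

  for pod in range(5):  # 5 PODs in total
      public, privates = pod_address_plan(pod)
      print(f"POD {pod}: public {public}, private {[str(n) for n in privates]}")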

Additional requirements (if servers are offered as MaaS):

  • Console access
  • PXE boot capable
  • Servers of a POD connected by a 40G switch

Example POD Configuration

  • Cisco UCS 240 M3 Rack Mount Server (2RU)
    • CPU: Intel Xeon E5-2600 v2
    • 4 x 500 GB internal storage with embedded RAID
    • PCIe RAID controller
    • Matrox G200e video controller
    • One RJ45 serial port connector
    • Two USB 2.0 port connectors
    • One DB15 VGA connector
    • 32 GB RAM
    • I/O: 2 x 40G data, 4 x 1G control
    • Six hot-swappable fans for front-to-rear cooling
    • Single or Dual Power Supply
    • BMC running Cisco Integrated Management Controller (CIMC) firmware.