BGS is to deploy to both bare metal and virtual environments. This requires a physical server environment for BGS.

[[get_started:get_started_work_environment:scenarios|Possible hardware scenarios for OPNFV]]
  * 3 x Control node (for HA/clustered setup of OpenStack and OpenDaylight)
  * 2 x Compute node (to bring up/run VNFs)
  * 1 x Jump Server/Landing Server in which the installer runs in a VM (Fuel)

**Total**: 30 servers are required (5 PODs with 6 servers each).

==== Server configuration ====
Typical server configuration (the same server type is assumed for all components of the POD for reasons of simplicity):

**Server**:
  * CPU: Intel Xeon E5-2600 (Ivy Bridge or later, or similar)
  * Disk: 4 x 500G-2T + 1 x 300GB SSD (leave some room for experiments); see the storage sketch below the Server list
    * The first 2 disks should be combined to form a 1 TB virtual store for the OS/software etc.
    * The remaining disks should be combined to form a virtual disk for Ceph storage.
    * The 5th disk (SSD) is used for the distributed storage (Ceph) journal to take advantage of SSD performance.
    * Performance testing requires a mix of compute nodes with and without Ceph (Swift + Cinder) storage.
  * Virtual ISO boot capabilities or a separate PXE boot server (DHCP/tftp or Cobbler); see the PXE boot sketch further below
  * Access to console ports/lights-out management through a management tool and/or a serial console server
  * Lights-out management/out-of-band management for power on/off/reset; see the IPMI sketch below the Server list
  * Memory: 32G RAM minimum
  * Single active power supply, with spares kept in the lab to address power supply failures
  * I/O (an example VLAN/bonding configuration is sketched after the Switch section):
    * Option I: 4 x 1G control, 2 x 40G data, 48-port switch
      * Connectivity to each network is through a separate NIC, which simplifies switch management but requires more NICs on the server and more switch ports.
      * 1 x 1G for lights-out management (IPMI)
      * 1 x 1G for admin/PXE boot
      * 1 x 1G for control plane connectivity
      * 1 x 1G for storage
      * "NICs" can be internal in case a blade server is used
      * 2 x 40G (or 10G) for the data network (redundancy, NIC bonding, high-bandwidth testing)
    * Option II: 1 x 1G control, 2 x 40G (or 10G) data, 24-port switch
      * Connectivity to the networks is through VLANs on the control NIC. The data NICs carry VNF traffic and storage traffic, segmented through VLANs.
      * "NICs" can be internal in case a blade server is used
    * Option III: 2 x 1G control, 2 x 10G data, 2 x 40G storage, 24-port switch
      * The data NICs are used for VNF traffic; the storage NICs are used for control plane and storage traffic, segmented through VLANs (separating host traffic from VNF traffic).
      * 1 x 1G for IPMI
      * 1 x 1G for admin/PXE boot
      * 2 x 10G for control plane connectivity/storage
      * 2 x 40G (or 10G) for the data network
      * "NICs" can be internal in case a blade server is used
  * Power: a single power supply is acceptable (redundant power not required, but nice to have)
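
The disk layout above can be realized in several ways depending on the RAID controller. A minimal sketch, assuming Linux software RAID and the classic ''ceph-disk'' tool; the device names are purely illustrative (/dev/sda-/dev/sdd as the four spinning disks, /dev/sde as the 300GB SSD):

<code bash>
# Device names below are illustrative for this lab sketch.
# OS/software store: combine the first two disks into one ~1 TB virtual device
# (RAID 0 shown for two 500G disks; a hardware RAID volume works equally well)
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb

# Ceph OSDs on the remaining disks, with their journals placed on the SSD;
# ceph-disk creates a journal partition on /dev/sde for each OSD
ceph-disk prepare /dev/sdc /dev/sde
ceph-disk prepare /dev/sdd /dev/sde
ceph-disk activate /dev/sdc1
ceph-disk activate /dev/sdd1
</code>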
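
Power on/off/reset over the out-of-band interface is typically driven with ''ipmitool''. A minimal sketch, assuming IPMI-over-LAN is enabled on the BMC; the BMC address and credentials are illustrative:

<code bash>
# BMC address and credentials below are illustrative.
# Query and control chassis power of a node through its BMC:
ipmitool -I lanplus -H 10.4.1.11 -U admin -P secret chassis power status
ipmitool -I lanplus -H 10.4.1.11 -U admin -P secret chassis power on
ipmitool -I lanplus -H 10.4.1.11 -U admin -P secret chassis power reset

# Serial-over-LAN console as an alternative to a dedicated console server:
ipmitool -I lanplus -H 10.4.1.11 -U admin -P secret sol activate
</code>

Vendor tools such as Cisco CIMC or HP iLO expose the same functions through their own CLIs and web UIs.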

**Switch**:
  * The TOR switch should support 1G/10G/40G links (either 4 separate links, or VLAN isolation to support 4 networks)
  * The uplink from the jump server to the Internet must be 1G or better
  * Public IP address pool per POD (8 addresses)
  * Private address pool per POD: 3 x /24 subnets (either not shared or VLAN-isolated)
  * Additional links/ports to support Ceph (Swift + Cinder volumes) on at least 3 to 5 nodes for the OPNFV controller and other PoCs
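
For Option II and Option III, the VLAN segmentation and NIC bonding on a node can be sketched with iproute2; the interface names and VLAN IDs below are illustrative, and the same segmentation has to be configured on the corresponding TOR switch ports:

<code bash>
# Interface names and VLAN IDs below are illustrative.
# Control NIC carries control-plane and storage networks as VLANs (Option II):
ip link add link eth0 name eth0.110 type vlan id 110   # control plane
ip link add link eth0 name eth0.120 type vlan id 120   # storage
ip link set eth0.110 up
ip link set eth0.120 up

# The two 40G data NICs are bonded for redundancy and bandwidth:
ip link add bond0 type bond mode 802.3ad
ip link set eth2 down && ip link set eth2 master bond0
ip link set eth3 down && ip link set eth3 master bond0
ip link set bond0 up
</code>

A persistent configuration would normally live in the distribution's network scripts or in the installer's (Fuel) network templates rather than in ad-hoc commands.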

Additional requirements (if servers are offered as MaaS):
  * Console access
  * PXE boot capable
  * Servers of a POD connected by a 40G switch
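
Where virtual ISO boot is not available, the jump server can provide the PXE boot service itself. A minimal sketch using dnsmasq as a combined DHCP/TFTP server on the admin/PXE network; the interface name, address range and TFTP root are illustrative, and Cobbler or a dedicated DHCP/tftp pair are equally valid:

<code bash>
# Interface, address range and tftp root below are illustrative.
# Serve DHCP + PXE boot files on the admin/PXE network:
dnsmasq --interface=eth1 \
        --dhcp-range=192.168.1.50,192.168.1.150,12h \
        --dhcp-boot=pxelinux.0 \
        --enable-tftp \
        --tftp-root=/var/lib/tftpboot
</code>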

==== Example Pod Configuration ====
  * Cisco UCS C240 M3 Rack Mount Server (2RU)
  * CPU: Intel Xeon E5-2600 v2
  * 4 x 500G internal storage with embedded RAID
  * PCIe RAID controller
  * Matrox G200e video controller
  * One RJ45 serial port connector
  * Two USB 2.0 port connectors
  * One DB15 VGA connector
  * 32G RAM
  * I/O: 2 x 40G data, 4 x 1G control
  * Six hot-swappable fans for front-to-rear cooling
  * Single or dual power supply
  * BMC running Cisco Integrated Management Controller (CIMC) firmware