//Revision history: 2015/01/12 Palani Chinnakannan; 2015/01/20 (current) Christopher Price//
BGS is to deploy to both bare metal and virtual environments. This requires a physical server environment for BGS.

[[get_started:get_started_work_environment:scenarios|Possible hardware scenarios for OPNFV]]
  * 1 x Jump Server/Landing Server in which the installer runs in a VM (FUEL)

**Total**: 30 servers are required (5 PODs with 6 servers each).

==== Server configuration ====
**Server**:

  * CPU: Intel Xeon E5-2600 (Ivy Bridge at least, or similar)
  * Disk: 4 x 500G-2T + 1 x 300GB SSD (leave some room for experiments)
    * The first 2 disks should be combined to form a 1 TB virtual store for the OS/software etc.
    * The remaining disks should be combined to form a virtual disk for CEPH storage.
    * The 5th disk (the SSD) is used as the distributed storage (CEPH) journal, to evaluate SSD technology.
    * Performance testing requires a mix of compute nodes with and without CEPH (Swift + Cinder) storage
  * Virtual ISO boot capabilities, or a separate PXE boot server (DHCP/TFTP or Cobbler)
  * Lights-out management/Out-of-band management for power on/off/reset
  * Memory: >= 32G RAM (minimum)
  * Single power supply active, with spares in the lab to address power supply failures
  * I/O
    * Option I: 4x1G Control, 2x40G Data, 48-port switch
      * Connectivity to each network is through a separate NIC. This simplifies switch management, but requires more NICs on the server and more switch ports.
      * 1 x 1G for IPMI (lights-out management)
      * 1 x 1G for control plane connectivity
      * 1 x 1G for storage
      * "NICs" can be internal in case a blade server is used
      * 2 x 40G (or 10G) for data network (redundancy, NIC bonding, high bandwidth testing)
    * Option II: 1x1G Control, 2x40G (or 10G) Data, 24-port switch
      * Connectivity to networks is through VLANs on the Control NIC. Data NIC used for VNF traffic and storage traffic, segmented through VLANs
      * "NICs" can be internal in case a blade server is used
    * Option III: 2x1G Control, 2x10G Data, 2x40G Storage, 24-port switch
      * Data NIC used for VNF traffic; storage NIC used for control plane and storage, segmented through VLANs (separates host traffic from VNF)
      * 1 x 1G for IPMI
      * 1 x 1G for Admin/PXE boot
      * 2 x 10G for control plane connectivity/storage
      * 2 x 40G (or 10G) for data network
      * "NICs" can be internal in case a blade server is used
  * Power: Single power supply acceptable (redundant power not required / nice to have)
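The disk arrangement described above (first two disks mirrored for the OS, the remaining disks pooled for CEPH, the SSD as journal) can be sketched as a dry run. This is only an illustration: the device names, the RAID levels chosen, and the `ceph-disk` invocation are assumptions to be adjusted for the actual controller and Ceph release, and every command is echoed rather than executed:

```shell
#!/bin/sh
# Dry-run sketch of the disk layout described above (all device names hypothetical).
OS_DISKS="/dev/sda /dev/sdb"     # first two disks -> ~1 TB virtual store for the OS
CEPH_DISKS="/dev/sdc /dev/sdd"   # remaining spinning disks -> CEPH data
JOURNAL_SSD="/dev/sde"           # 300 GB SSD -> CEPH journal

# Mirror the OS volume; stripe the CEPH capacity (Ceph replicates across nodes).
echo "mdadm --create /dev/md0 --level=1 --raid-devices=2 $OS_DISKS"
echo "mdadm --create /dev/md1 --level=0 --raid-devices=2 $CEPH_DISKS"

# Prepare an OSD whose journal lives on the SSD (ceph-disk era syntax).
echo "ceph-disk prepare /dev/md1 $JOURNAL_SSD"
```

On a server with a hardware RAID controller, the two `mdadm` lines would instead be configured in the controller firmware; the split (OS mirror, data stripe, SSD journal) stays the same.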
**Switch**:

  * PXE boot capable
  * Servers of a POD connected by a 40G switch
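For the separate PXE boot server option mentioned above, a minimal dnsmasq setup is one common way to provide DHCP and TFTP together. The fragment below is a hedged sketch rather than a tested configuration: the interface name, address range, and TFTP root are all assumptions for the lab's admin/PXE network:

```
# /etc/dnsmasq.d/pxe.conf -- minimal PXE boot sketch (all values hypothetical)
interface=eth1                             # admin/PXE boot network NIC
dhcp-range=192.168.0.50,192.168.0.150,12h  # address pool for booting nodes
dhcp-boot=pxelinux.0                       # boot loader served over TFTP
enable-tftp
tftp-root=/var/lib/tftpboot                # holds pxelinux.0 and kernel/initrd
```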
==== Example Pod Configuration ====

  * Cisco UCS C240 M3 Rack Mount Server (2RU)
    * CPU: Intel Xeon E5-2600 v2
    * 4 x 500G internal storage with embedded RAID
    * PCIe RAID controller
    * Matrox G200e video controller
    * One RJ45 serial port connector
    * Two USB 2.0 port connectors
    * One DB15 VGA connector
    * 32G RAM
    * I/O: 2 x 40G Data, 4x1G Control
    * Six hot-swappable fans for front-to-rear cooling
    * Single or dual power supply
    * BMC running Cisco Integrated Management Controller (CIMC) firmware
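The lights-out management requirement above (power on/off/reset via the BMC, which CIMC also exposes over IPMI) can be exercised with `ipmitool`. The following is a dry-run sketch: the BMC address and credentials are placeholders, and the commands are echoed rather than sent to real hardware:

```shell
#!/bin/sh
# Dry-run sketch of out-of-band power control (BMC address/credentials hypothetical).
BMC_HOST=10.0.0.10
BMC_USER=admin
BMC_PASS=example

# Base ipmitool invocation over the IPMI-over-LAN (lanplus) interface.
IPMI="ipmitool -I lanplus -H $BMC_HOST -U $BMC_USER -P $BMC_PASS"

echo "$IPMI chassis power status"  # query current power state
echo "$IPMI chassis power on"      # power the node on
echo "$IPMI chassis power reset"   # hard reset for a hung node
```

Dropping the `echo` wrappers runs the commands for real; CI harnesses typically drive exactly these three verbs to cycle PODs between test runs.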