This shows you the differences between two versions of the page.
get_started:get_started_work_environment [2015/01/13 14:12] Frank Brockners [Server configuration]
get_started:get_started_work_environment [2015/01/20 11:21] (current) Christopher Price
Line 4:
BGS deploys to both bare metal and to a virtual environment. This requires a physical server environment for BGS.
+ [[get_started:get_started_work_environment:scenarios|Possible hardware scenarios for OPNFV]]
Line 18 → Line 19:
* 1 x Jump Server/Landing Server in which the installer runs in a VM (FUEL)
- **Total**: A total of 35 servers (5 PODs with 7 servers each) is required.
+ **Total**: A total of 30 servers (5 PODs with 6 servers each) is required.
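The corrected total follows directly from the POD layout; a quick arithmetic check, using the numbers from the corrected line above:

```shell
# Sanity check of the revised total: 5 PODs, each with 6 servers
# (5 deployment servers + 1 jump/landing server).
pods=5
servers_per_pod=6
echo $(( pods * servers_per_pod ))   # prints 30
```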
==== Server configuration ====
Line 29 → Line 30:
* First 2 disks should be combined to form a 1 TB virtual store for the OS/Software etc.
* The remaining disks should be combined to form a virtual disk for CEPH storage.
- * The 7th disk (SSD) is used for the distributed storage (CEPH) journal, leveraging SSD performance.
+ * The 5th disk (SSD) is used for the distributed storage (CEPH) journal, leveraging SSD performance.
* Performance testing requires a mix of compute nodes with CEPH (Swift + Cinder) storage and compute nodes without CEPH storage
* Virtual ISO boot capabilities or a separate PXE boot server (DHCP/TFTP or Cobbler)
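A disk layout with an SSD journal like the one described above would typically be realized with the `ceph-disk` tool of that era; a minimal sketch, assuming the combined CEPH virtual disk appears as `/dev/sdb` and the SSD as `/dev/sdc` (both device names are illustrative assumptions, not from the source):

```shell
# Illustrative only: prepare one OSD whose data lives on the large virtual
# disk and whose journal lives on the SSD (device names are assumptions).
ceph-disk prepare --cluster ceph /dev/sdb /dev/sdc
# Activate the freshly prepared data partition.
ceph-disk activate /dev/sdb1
```

On real hardware the device names come from the RAID controller's virtual-disk presentation, so verify them with `lsblk` before running anything.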
Line 48 → Line 49:
* Connectivity to networks is through VLANs on the Control NIC. The Data NIC carries VNF traffic and storage traffic, segmented through VLANs
* "NICs" can be internal in case a blade server is used
- * Option III: 2x1G Control, 2x10G Data, 2x10G Storage, 24 Port Switch
+ * Option III: 2x1G Control, 2x10G Data, 2x40G Storage, 24 Port Switch
* The Data NIC carries VNF traffic; the Storage NIC carries control-plane and storage traffic, segmented through VLANs (separating host traffic from VNF traffic)
* 1 x 1G for IPMI
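The VLAN segmentation described in the options above can be sketched with standard iproute2 sub-interfaces; the interface names and VLAN IDs below are assumptions for illustration, not values from the source:

```shell
# Illustrative only: split VNF and storage traffic on the data NIC using
# VLAN sub-interfaces (eth1, VLAN 100/200 are assumed, not specified here).
ip link add link eth1 name eth1.100 type vlan id 100   # VNF traffic
ip link add link eth1 name eth1.200 type vlan id 200   # storage traffic
ip link set eth1.100 up
ip link set eth1.200 up
```

The matching VLAN IDs must also be trunked on the 24-port switch for the segmentation to carry end to end.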