BGS is to deploy both to bare metal and to a virtual environment. This requires a physical server environment for BGS.
[[get_started:get_started_work_environment:scenarios|Possible hardware scenarios for OPNFV]]
  
  
  * 1 x Jump Server/Landing Server in which the installer runs in a VM (FUEL)
  
**Total**: A total of 30 servers (5 PODs with 6 servers each) is required.
  
==== Server configuration ====
  
**Server**:
  * CPU: Intel Xeon E5-2600 (Ivy Bridge or later, or similar)
  * Disk: 4 x 500G-2T + 1 x 300GB SSD (leave some room for experiments; a layout sketch follows this list)
     * First 2 disks should be combined to form a 1 TB virtual store for the OS/software etc.
     * Remaining disks should be combined to form a virtual disk for CEPH storage.
     * The 5th disk (SSD) is used as the journal for distributed (CEPH) storage.
     * Performance testing requires a mix of compute nodes with and without CEPH (Swift + Cinder) storage.
  * Virtual ISO boot capabilities or a separate PXE boot server (DHCP/TFTP or Cobbler)
  * Single power supply active, with spares kept in the lab to address power supply failures.
  * I/O (a port-count sketch follows this list)
      * Option I: 4x1G Control, 2x40G Data, 48 Port Switch
        * Connectivity to each network is through a separate NIC, which simplifies switch management but requires more NICs on the server and more switch ports.
          * 1 x 1G for IPMI (lights-out management)
          * 1 x 1G for Admin/PXE boot
          * 1 x 1G for control plane connectivity
          * 1 x 1G for storage
          * "NICs" can be internal in case a blade server is used
        * 2 x 40G (or 10G) for data network (redundancy, NIC bonding, high-bandwidth testing)
      * Option II: 1x1G Control, 2x40G (or 10G) Data, 24 Port Switch
        * Connectivity to networks is through VLANs on the control NIC. The data NIC is used for VNF traffic and storage traffic, segmented through VLANs.
        * "NICs" can be internal in case a blade server is used
      * Option III: 2x1G Control, 2x10G Data, 2x40G Storage, 24 Port Switch
        * Data NIC used for VNF traffic; storage NIC used for the control plane and storage, segmented through VLANs (separates host traffic from VNF traffic)
          * 1 x 1G for IPMI
          * 1 x 1G for Admin/PXE boot
          * 2 x 10G for control plane connectivity/storage
          * 2 x 40G (or 10G) for data network
          * "NICs" can be internal in case a blade server is used
  * Power: Single power supply acceptable (redundant power not required, nice to have)
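
Below is a minimal Python sketch of the disk grouping described above (first two HDDs combined into the OS/software store, remaining HDDs combined for CEPH data, SSD used as the CEPH journal). The ''Disk'' class, function names, and example device names are illustrative assumptions, not part of any OPNFV tooling.

<code python>
from dataclasses import dataclass

@dataclass
class Disk:
    name: str
    size_gb: int
    ssd: bool = False

def plan_layout(disks):
    """Group disks as described above: first two HDDs for the OS/software
    store, remaining HDDs for CEPH data, and the SSD as the CEPH journal."""
    hdds = [d for d in disks if not d.ssd]
    ssds = [d for d in disks if d.ssd]
    return {
        "os_store": hdds[:2],    # combined ~1 TB virtual store
        "ceph_data": hdds[2:],   # combined virtual disk for CEPH storage
        "ceph_journal": ssds,    # 300 GB SSD journal
    }

# Example: 4 x 500 GB HDDs plus 1 x 300 GB SSD, as in the spec above.
server = [Disk(f"sd{c}", 500) for c in "abcd"] + [Disk("sde", 300, ssd=True)]
for role, group in plan_layout(server).items():
    print(role, [d.name for d in group], sum(d.size_gb for d in group), "GB")
</code>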
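
Similarly, a small sketch that records the three I/O options as data and prints the per-server NIC port counts. The port roles follow the sub-bullets above; the dictionary layout and labels are illustrative assumptions only.

<code python>
# The three I/O options above, recorded as data (illustrative only).
OPTIONS = {
    "I":   {"1G": ["IPMI", "Admin/PXE", "control plane", "storage"],
            "40G": ["data", "data"]},
    "II":  {"1G": ["control + storage VLANs"],
            "40G": ["data", "data"]},
    "III": {"1G": ["IPMI", "Admin/PXE"],
            "10G": ["control/storage", "control/storage"],
            "40G": ["data", "data"]},
}

for name, nics in OPTIONS.items():
    counts = ", ".join(f"{len(roles)}x{speed}" for speed, roles in nics.items())
    total = sum(len(roles) for roles in nics.values())
    print(f"Option {name}: {counts} ({total} NIC ports per server)")
</code>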
  