get_started:get_started_work_environment

Differences

This shows you the differences between two versions of the page.

get_started:get_started_work_environment [2015/01/10 18:46]
Jonas Bjurel [Server configuration]
get_started:get_started_work_environment [2015/01/20 11:21] (current)
Christopher Price
Line 4: Line 4:
  
 BGS is to deploy both to bare metal and to a virtual environment. This requires a physical server environment for BGS.
 +[[get_started:get_started_work_environment:scenarios|Possible hardware scenarios for OPNFV]]
  
  
Line 18: Line 19:
   * 1 x Jump Server/Landing Server in which the installer runs in a VM (FUEL)
  
-**Total**: A total of 35 servers (5 PODs with 7 servers each) is required.
 +**Total**: A total of 30 servers (5 PODs with 6 servers each) is required.
  
 ==== Server configuration ====
Line 25: Line 26:
  
 **Server**:
-  * CPU: Intel Xeon E5-2600 (or similar)
 +  * CPU: Intel Xeon E5-2600 (Ivy Bridge at least, or similar)
-  * Disk: 6 x 500G-3T + 1 x 300GB SSD (leave some room for experiments)
 +  * Disk: 4 x 500G-2T + 1 x 300GB SSD (leave some room for experiments)
      * First 2 disks should be combined to form a 1 TB virtual store for the OS/Software etc
-      * The next 4 disks should be combined to form a 2 TB storage space (CEPH)
 +      * The remaining disks should be combined to form a virtual disk for CEPH storage.
-      * The 7th disk (SSD) for distributed storage (CEPH), to evaluate SSD technology.
 +      * The 5th disk (SSD) for the distributed storage (CEPH) journal, towards SSD technology.
      * Performance testing requires a mix of compute nodes with CEPH (Swift + Cinder) storage and compute nodes without CEPH storage
-   * Virtual ISO boot capabilities or a separate PXE boot server (DHCP/tftp or Cobbler)
 +  * Virtual ISO boot capabilities or a separate PXE boot server (DHCP/tftp or Cobbler)
   * Access to console ports/lights-out-management through management tool and/or serial console server
   * Lights-out-management/Out-of-band management for power on/off/reset
   * Memory: >= 32G RAM (Minimum)
-  * I/O: 4 x 1G Control NICs, 2 x 10/40G data NICs (at least for compute nodes for testing)
-      * 1 x 1G for ILMI (Lights out Management)
-      * 1 x 1G for Admin/PXE boot
-      * 1 x 1G for control plane connectivity
-      * 1 x 1G for storage
-      * 2 x 10G/40G for data network (redundancy, NIC bonding, high bandwidth testing)
 +  * Single power supply active, with spares in the lab to address power supply failures.
 +  * I/O
 +      * Option I: 4x1G Control, 2x40G Data, 48 Port Switch
 +        * Connectivity to each network is through a separate NIC, which simplifies switch management but requires more NICs on the server and more switch ports.
 +          * 1 x 1G for ILMI (Lights out Management)
 +          * 1 x 1G for Admin/PXE boot
 +          * 1 x 1G for control plane connectivity
 +          * 1 x 1G for storage
 +          * "NICs" can be internal in case a blade server is used
 +        * 2 x 40G (or 10G) for data network (redundancy, NIC bonding, high bandwidth testing)
 +      * Option II: 1x1G Control, 2x40G (or 10G) Data, 24 Port Switch
 +        * Connectivity to the networks is through VLANs on the Control NIC. The Data NIC is used for VNF traffic and storage traffic, segmented through VLANs.
 +        * "NICs" can be internal in case a blade server is used
 +      * Option III: 2x1G Control, 2x10G Data, 2x40G Storage, 24 Port Switch
 +        * Data NIC used for VNF traffic, storage NIC used for control plane and storage, segmented through VLANs (separates host traffic from VNF traffic)
 +          * 1 x 1G for IPMI
 +          * 1 x 1G for Admin/PXE boot
 +          * 2 x 10G for control plane connectivity/storage
 +          * 2 x 40G (or 10G) for data network
 +          * "NICs" can be internal in case a blade server is used
 +  * Power: Single power supply acceptable (redundant power not required/nice to have)
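The three I/O options above trade per-server NIC count against VLAN complexity. The sketch below is purely illustrative and not part of the specification: it tallies NIC ports per server and per POD for each option, taking the NIC counts from the list above and assuming 6 servers per POD (inferred from 30 servers across 5 PODs).

<code python>
# Illustrative only: per-server and per-POD NIC tallies for the three I/O
# options listed above. The per-option NIC counts come from the list; the
# value of SERVERS_PER_POD is an assumption inferred from "30 servers / 5 PODs".

SERVERS_PER_POD = 6  # assumption, not stated per POD on this page

# (label, 1G control NICs, data NICs, dedicated storage NICs) per server
IO_OPTIONS = [
    ("Option I   (4x1G control, 2x40G data, 48-port switch)", 4, 2, 0),
    ("Option II  (1x1G control, 2x40G data, 24-port switch)", 1, 2, 0),
    ("Option III (2x1G control, 2x10G data, 2x40G storage, 24-port switch)", 2, 2, 2),
]

for label, control, data, storage in IO_OPTIONS:
    per_server = control + data + storage
    per_pod = per_server * SERVERS_PER_POD
    print(f"{label}: {per_server} NIC ports per server, {per_pod} per POD")
</code>

Under these assumptions Option II needs the fewest ports per server because the control networks ride as VLANs on a single NIC, which is consistent with its smaller 24-port switch.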
  
 **Switch**:
Line 56: Line 72:
   * PXE boot capable
   * Servers of a POD connected by a 40G switch
-==== Example Server ====
 +==== Example Pod Configuration ====
  
-  * Cisco UCS 5108 Blade Server Chassis
-  * 5 x B200 M3
 +  * Cisco UCS 240 M3 Rack Mount Server (2RU)
     * CPU: Intel Xeon E5-2600 v2
-    * 1TB internal storage
 +    * 4 x 500G internal storage with embedded RAID
 +    * PCIe RAID controller
 +    * Matrox G200e video controller 
 +    * One RJ45 serial port connector 
 +    * Two USB 2.0 port connectors 
 +    * One DB15 VGA connector
     * 32G RAM
-    * I/O: 2 x 40G
 +    * I/O: 2 x 40G Data, 4 x 1G Control
 +    * Six hot-swappable fans for front-to-rear cooling 
 +    * Single or Dual Power Supply 
 +    * BMC running Cisco Integrated Management Controller (CIMC) firmware.
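To make the minimum requirements under "Server configuration" easier to compare against a concrete machine such as the example above, here is a small illustrative check. The thresholds and inventory values mirror the lists on this page; every field and function name is invented for this sketch and is not part of any OPNFV tooling.

<code python>
# Illustrative only: compare one server against a few of the minimums listed
# under "Server configuration". Field names are invented for this sketch.

MIN_RAM_GB = 32        # "Memory: >= 32G RAM (Minimum)"
MIN_DATA_NICS = 2      # 2 x 40G (or 10G) data NICs
MIN_CONTROL_NICS = 1   # at least one 1G control NIC (Option II)

example_server = {
    "model": "Cisco UCS 240 M3 Rack Mount Server (2RU)",
    "cpu": "Intel Xeon E5-2600 v2",
    "ram_gb": 32,        # 32G RAM
    "hdd_count": 4,      # 4 x 500G internal storage with embedded RAID
    "data_nics": 2,      # I/O: 2 x 40G Data
    "control_nics": 4,   # 4 x 1G Control
}

def check_minimums(server):
    """Return a list of problems found by these (partial) checks."""
    problems = []
    if server["ram_gb"] < MIN_RAM_GB:
        problems.append(f"RAM {server['ram_gb']}G is below the {MIN_RAM_GB}G minimum")
    if server["data_nics"] < MIN_DATA_NICS:
        problems.append("fewer than 2 data NICs")
    if server["control_nics"] < MIN_CONTROL_NICS:
        problems.append("no 1G control NIC")
    return problems

print(check_minimums(example_server) or "no problems found by these checks")
</code>

The sketch only checks RAM and NIC counts; items such as the 300GB SSD, lights-out management and PXE boot capability would have to be verified separately.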