BGS is to deploy both to bare metal and to a virtual environment. This requires a physical server environment for BGS.

[[get_started:get_started_work_environment:scenarios|Possible hardware scenarios for OPNFV]]
  
  
Typical server configuration (for simplicity, the same server is assumed for all components of the POD):
  
**Server**:
  * CPU: Intel Xeon E5-2600 (Ivy Bridge or later, or similar)
  * Disk: 4 x 500G-2T + 1 x 300GB SSD (leave some room for experiments)
    * The first 2 disks should be combined to form a 1 TB virtual store for the OS/software etc.
    * The remaining disks should be combined to form a virtual disk for CEPH storage.
    * The 5th disk (SSD) is used as the distributed storage (CEPH) journal, to benefit from SSD performance.
    * Performance testing requires a mix of compute nodes with and without CEPH (Swift + Cinder) storage
  * Virtual ISO boot capabilities or a separate PXE boot server (DHCP/tftp or Cobbler)
  * Access to console ports/lights-out-management through a management tool and/or serial console server
  * Lights-out-management/Out-of-band management for power on/off/reset (see the IPMI sketch after this list)
  * Memory: >= 32G RAM (minimum)
  * I/O (see the network layout sketch after this list)
      * Option I: 4x1G Control, 2x40G Data, 48 Port Switch
        * Connectivity to each network is through a separate NIC, which simplifies switch management but requires more NICs on the server and more switch ports.
          * 1 x 1G for IPMI (Lights-out Management)
          * 1 x 1G for Admin/PXE boot
          * 1 x 1G for control plane connectivity
          * 1 x 1G for storage
          * "NICs" can be internal in case a blade server is used
        * 2 x 40G (or 10G) for the data network (redundancy, NIC bonding, high bandwidth testing)
      * Option II: 1x1G Control, 2x40G (or 10G) Data, 24 Port Switch
        * Connectivity to the networks is through VLANs on the Control NIC. The Data NIC is used for VNF traffic and storage traffic, segmented through VLANs.
        * "NICs" can be internal in case a blade server is used
      * Option III: 2x1G Control, 2x10G Data, 2x40G Storage, 24 Port Switch
        * The Data NIC is used for VNF traffic; the storage NIC is used for control plane and storage traffic, segmented through VLANs (separates host traffic from VNF traffic)
          * 1 x 1G for IPMI
          * 1 x 1G for Admin/PXE boot
          * 2 x 10G for control plane connectivity/storage
          * 2 x 40G (or 10G) for the data network
          * "NICs" can be internal in case a blade server is used
  * Power: a single active power supply is acceptable (redundant power not required, nice to have); keep spare power supplies in the lab to address power supply failures.
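The out-of-band power control listed above is typically scripted from the jump host. The following is a minimal, illustrative sketch, assuming ipmitool is installed and the BMCs are reachable on the lights-out management network; the addresses and credentials are placeholders, not values from this page.

<code python>
# Minimal sketch: drive lights-out power control over the 1G IPMI/LOM network.
# Assumes ipmitool is installed on the jump host; BMC addresses, username and
# password below are placeholders for your own pod inventory.
import subprocess

def ipmi_power(bmc_host, action, user="admin", password="changeme"):
    """Run an ipmitool chassis power command (status|on|off|cycle|reset)."""
    cmd = [
        "ipmitool", "-I", "lanplus",
        "-H", bmc_host, "-U", user, "-P", password,
        "chassis", "power", action,
    ]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    # Hypothetical BMC addresses on the pod's lights-out management subnet.
    for bmc in ["192.168.0.11", "192.168.0.12", "192.168.0.13"]:
        print(bmc, ipmi_power(bmc, "status").strip())
</code>

The ''on'', ''off'' and ''reset'' actions map directly to the power on/off/reset requirement above.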
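As a concrete illustration of how one of the I/O options could be wired on a node, the sketch below renders an ifupdown-style fragment for an Option III-like layout (bonded NICs with VLAN-segmented control plane and storage, plus a bonded data interface). It assumes a Debian/Ubuntu host with the ''ifenslave'' and ''vlan'' packages; the interface names, VLAN IDs and addresses are placeholders rather than part of the requirements.

<code python>
# Sketch: render an ifupdown-style fragment for an Option III-like layout:
# bonded 10G NICs carrying VLAN-segmented control-plane and storage traffic,
# and bonded 40G NICs for VNF/data traffic. Interface names, VLAN IDs and
# addresses are placeholders; assumes Debian/Ubuntu with ifenslave + vlan.

BONDS = {
    # bond name: (slave NICs, {VLAN id: IPv4 address})
    "bond0": (["ens1f0", "ens1f1"], {101: "192.168.1.10",    # control plane
                                     102: "192.168.2.10"}),  # storage
    "bond1": (["ens2f0", "ens2f1"], {201: "192.168.3.10"}),  # VNF data
}

def render(bonds, netmask="255.255.255.0"):
    lines = []
    for bond, (slaves, vlans) in bonds.items():
        lines += [f"auto {bond}",
                  f"iface {bond} inet manual",
                  f"    bond-slaves {' '.join(slaves)}",
                  "    bond-mode active-backup",   # plain redundancy; LACP would need switch config
                  "    bond-miimon 100",
                  ""]
        for vid, addr in vlans.items():
            lines += [f"auto {bond}.{vid}",
                      f"iface {bond}.{vid} inet static",
                      f"    address {addr}",
                      f"    netmask {netmask}",
                      f"    vlan-raw-device {bond}",
                      ""]
    return "\n".join(lines)

if __name__ == "__main__":
    print(render(BONDS))
</code>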
**Switch**:
  * TOR switch should support 1G/10G/40G links (either 4 of them, or VLAN isolation to support 4 networks)
  * Uplink from the Jump server to the Internet must be 1G or better.
  * Public IP address pool per pod (8 addresses)
  * Private address pool per pod: 3 x /24 subnets (either not shared, or VLAN isolated) (see the addressing sketch below)
  * Additional links/ports to support Ceph (Swift + Cinder volumes) on at least 3 to 5 nodes for the OPNFV controller and other PoCs (see the Ceph sketch below)
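The per-pod address pools can be laid out with a few lines of Python. In the sketch below only the sizes (three /24 private subnets and an 8-address public block, i.e. a /29) come from this page; the actual prefixes are placeholders.

<code python>
# Sketch: one way to lay out the per-pod addressing with the stdlib ipaddress
# module. The supernet and public block are placeholders; only the sizes
# (3 x /24 private, 8 public addresses = one /29) come from the list above.
import ipaddress

PRIVATE_SUPERNET = ipaddress.ip_network("192.168.0.0/22")   # placeholder per-pod block
PUBLIC_POOL      = ipaddress.ip_network("203.0.113.8/29")   # 8 addresses (placeholder)

ROLES = ["admin/PXE", "control plane", "storage"]

def pod_plan():
    subnets = list(PRIVATE_SUPERNET.subnets(new_prefix=24))[:len(ROLES)]
    plan = dict(zip(ROLES, subnets))
    plan["public"] = PUBLIC_POOL
    return plan

if __name__ == "__main__":
    for role, net in pod_plan().items():
        print(f"{role:15} {net}  ({net.num_addresses} addresses)")
</code>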
    
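For the 3- to 5-node Ceph deployment referenced above, a starting point is a minimal ''ceph.conf'' generated per pod. The sketch below is illustrative only: host names, addresses and the fsid are placeholders, and SSD journals are normally assigned per OSD when the OSDs are created (for example with ceph-deploy), not in this file.

<code python>
# Sketch: generate a minimal ceph.conf [global] section for a 3-node cluster.
# Hostnames, addresses and the fsid are placeholders; the networks follow the
# illustrative /24 plan above. SSD journals are normally assigned per OSD at
# creation time, not in this file.
import uuid

MONS = {"node1": "192.168.2.11", "node2": "192.168.2.12", "node3": "192.168.2.13"}

def ceph_conf(mons, public_net="192.168.2.0/24", cluster_net="192.168.3.0/24"):
    return "\n".join([
        "[global]",
        f"fsid = {uuid.uuid4()}",
        f"mon initial members = {', '.join(mons)}",
        f"mon host = {', '.join(mons.values())}",
        f"public network = {public_net}",
        f"cluster network = {cluster_net}",   # optional dedicated replication VLAN (placeholder)
        "osd journal size = 10240",           # MB; journal placed on the SSD partition
    ])

if __name__ == "__main__":
    print(ceph_conf(MONS))
</code>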
  
  * PXE boot capable
  * Servers of a POD connected by a 40G switch
==== Example Pod Configuration ====
  
  * Cisco UCS C240 M3 Rack Mount Server (2RU)
    * CPU: Intel Xeon E5-2600 v2
    * 4 x 500G internal storage with embedded RAID
    * PCIe RAID controller
    * Matrox G200e video controller
    * One RJ45 serial port connector
    * Two USB 2.0 port connectors
    * One DB15 VGA connector
    * 32G RAM
    * I/O: 2 x 40G Data, 4 x 1G Control
    * Six hot-swappable fans for front-to-rear cooling
    * Single or dual power supply
    * BMC running Cisco Integrated Management Controller (CIMC) firmware
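To make the minimum requirements easy to apply to a candidate server such as the example above, they can be captured in a small checklist script. The sketch below uses ad hoc field names and an example inventory entry loosely based on the configuration above; it is illustrative, not an official validation tool.

<code python>
# Sketch: check an example server description against the minimum requirements
# listed on this page (>= 32G RAM, 4 HDDs + 1 SSD, 4x1G control + 2x40G data NICs).
# Field names are ad hoc for this illustration, not an official inventory schema.

MINIMUM = {"ram_gb": 32, "hdd_count": 4, "ssd_count": 1,
           "control_nics_1g": 4, "data_nics_40g": 2}

example_server = {
    "model": "Cisco UCS C240 M3",
    "ram_gb": 32,
    "hdd_count": 4,          # 4 x 500G with embedded RAID
    "ssd_count": 0,          # an SSD journal disk is not listed in the example above
    "control_nics_1g": 4,
    "data_nics_40g": 2,
}

def check(server, minimum):
    """Return a list of human-readable shortfalls (empty list = meets minimums)."""
    return [f"{key}: have {server.get(key, 0)}, need >= {need}"
            for key, need in minimum.items() if server.get(key, 0) < need]

if __name__ == "__main__":
    issues = check(example_server, MINIMUM)
    status = "meets the minimums" if not issues else "; ".join(issues)
    print(f"{example_server['model']}: {status}")
</code>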