opnfv-orange [2015/02/10 13:27]
Arnaud Morin [Node distribution]
opnfv-orange [2016/03/16 10:09] (current)
David Blaisonneau [Testbed description]
====== Introduction ======

Orange will provide one multi-site testbed. Two nodes will be installed independently, one in Paris and one in Lannion (Brittany). In a first step the two nodes are independent and shall be considered as two different testbeds (separate installations, no database synchronization, ...), but they could be used for multi-site test cases in the future.
  
The goal is to grant remote access to virtualized capabilities (no bare-metal resources).
On each node an OPNFV solution will be installed, based on the [[https://wiki.opnfv.org/get_started|BGS project]].
On top of this infrastructure several VNFs will be deployed.
Access to the VNFs, tooling and compute resources will be possible for any community member.

The testbed shall be open during Q4 2015.
  
  
===== Overall description =====
  
The testbed can be described as follows:
  
{{ :opnfv-orange.jpg?400 |}}
  
===== Remote access =====
  
==== Code of Conduct ====

This laboratory is a controlled environment made available to OPNFV community members by Orange for improving the OPNFV solution and testing interoperability with other manufacturers' equipment for the OPNFV community.
  
This testing gives community members the advantage of working together to test interoperability.
  
Access to the laboratory is granted by the laboratory administrator. The procedure to get credentials, the documentation (architecture, user guide), the testbed inventory (hardware, bare-metal capabilities, available tooling) and the local policy (physical access, specific code of conduct) are described in the pages below.
  
Members are encouraged to share as many results as possible. The results of community tests will be collected into the [[https://wiki.opnfv.org/pharos|Pharos]] dashboard. The [[https://wiki.opnfv.org/pharos|Pharos project]] will disseminate the test results so that any testbed of the federation can replay and consolidate test campaigns with different combinations of hardware/software.
  
Specific tests by members using proprietary components, to help improve the overall quality of the OPNFV solution, are possible. Members participating in such an interoperability event, dry run, practice run, certification wave or wave rehearsal must not (either before, during or after the scheduled development lab use or interoperability event, or while performing daily tasks at any of Orange's test development laboratories):
  * Access any other member's equipment or other facilities at Orange without express written permission from an authorized Orange project manager.
  * Discuss any other member's information and/or interoperability problems with third parties or the press without Orange's prior written authorization.
  * Reveal other members' test results, either directly or implicitly, to his/her own company, third parties or the press.
  * Share or copy confidential information or intellectual property other than in accordance with the applicable OPNFV policy.
  
Reservation of testing resources shall be managed at the testbed level by the testbed administrator. The TSC may re-prioritize test campaigns if necessary.
  
  
==== Access procedure ====
  
This environment is free to use by any OPNFV contributor or committer for the purpose of OPNFV-approved activities. Access to this environment can be granted by sending an e-mail to: TBD
  
  
subject: opnfv_orange_access.

The following information should be provided in the request:
  * Full name
  * e-mail
  * Phone
  * Organization
  * PGP public key (preferably registered with a PGP PKI server)
  * SSH public key

Granting access normally takes 2-3 business days.

Detailed access descriptions will be provided with your access grant e-mail.
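As an illustration, the two key items requested above could be prepared as follows. This is a minimal sketch: the key type, file names and the e-mail address (jane.doe@example.com) are all placeholders, not values mandated by the testbed.

```shell
# Sketch: preparing the SSH and PGP public keys for the access request.
# All names, paths and the e-mail address below are hypothetical.

KEYDIR="$(mktemp -d)"

# Generate a dedicated SSH key pair for the testbed request.
# (The empty passphrase keeps the example non-interactive;
# use a real passphrase in practice.)
ssh-keygen -t rsa -b 4096 -C "jane.doe@example.com" \
    -f "$KEYDIR/opnfv_orange_key" -N "" -q

# The file to attach to the request is the PUBLIC half:
cat "$KEYDIR/opnfv_orange_key.pub"

# For the PGP public key, an existing GnuPG key can be exported in
# ASCII-armored form (requires a key for that identity in your keyring):
#   gpg --armor --export jane.doe@example.com > "$KEYDIR/opnfv_orange_pgp.asc"
```

Only the `.pub` file and the armored PGP export should be sent; the private halves never leave your machine.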
 + 
===== Testbed description =====

=== Hardware description ===

^ Node                     ^ Paris                               ^ Lannion                             ^
| Control & Compute Nodes  | DELL PowerEdge 730                  | HP DL380 gen9                       |
| Jumpstart server         |                                     |                                     |
| PXE server               |                                     |                                     |
| FW                       |                                     |                                     |
| Switching                | EX 4550, 32-port 100M/1G/10G BaseT  | EX 4550, 32-port 100M/1G/10G BaseT  |
 + 
 + 
Servers have been upgraded with:
  * CPU:
    * 2x Intel Xeon E5-2603 v3 (1.6 GHz, 15M cache)
    * 2x Intel Xeon E5-2699 v3 (2.3 GHz, 45M cache)
  * Disk: 2x Intel SSD DC S3500 480GB
  * NIC: default + Intel Ethernet [[http://ark.intel.com/products/58954/Intel-Ethernet-Converged-Network-Adapter-X540-T2|X540-T2]]
 + 
=== Network Design ===

  * [[opnfv-orange-pod2|Lannion Pod2]]
 + 
=== VNFs ===

We plan to deploy several test VNFs that could be used for CI.
 + 
=== Tooling ===

A tooling VNF including [[https://wiki.openstack.org/wiki/Rally|Rally]] and [[http://robotframework.org/|RobotFramework]] is planned.
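To give an idea of what the Rally tooling would run, below is a minimal Rally task file in the classic task format. It is only a sketch of the format: the scenario, flavor, image and runner values are generic examples, not a test plan committed for this testbed.

```yaml
---
# Example Rally task (hypothetical values): boot 10 tiny servers,
# 2 at a time, and delete each one after boot.
NovaServers.boot_and_delete_server:
  -
    args:
      flavor:
        name: "m1.tiny"
      image:
        name: "cirros"
    runner:
      type: "constant"
      times: 10
      concurrency: 2
```

Such a file would be launched against the testbed's OpenStack endpoint from the tooling VNF, and its results could feed the Pharos dashboard mentioned above.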
 + 
 + 
===== Installation procedure(s) =====

The detailed installation procedures of the two nodes are described here:

  * [[opnfv-orange-paris|Installation OPNFV Orange Paris]]
  * Installation OPNFV Orange Lannion: Joid procedure
  