//Revisions: 2014/12/18 09:04 Parviz Yegani; 2015/04/03 15:53 (current) Stuart Mackie.//
====== Project: System Configuration And Reporting (OSCAR) ======
  * //Proposed name for the project:// **OSCAR** (OPNFV System Configuration And Reporting)
  * //Proposed name for the repository:// oscar
  * //Project Categories:// Collaborative Development
====== Project Description ======
Project “OSCAR” provides a deployment platform for automatically installing and configuring the software components of OPNFV systems. It is intended for use by operators deploying OPNFV in integration labs and in production networks, and to be part of, and integrated with, the OSS environment. The input to OSCAR will be the set of images and packages (with associated metadata) created by the Octopus CI environment (or other sources), as shown in the diagram below.

{{ :oscar:oscar_in_opnfv v2.png?nolink |}}

A primary goal of OSCAR is to support the installation of a variety of OPNFV software stacks based upon different underlying components, since each operator may have their own preferences driven by their own unique business and technical requirements. These variations could, for example, leverage different SDN controllers or approaches to compute infrastructure (e.g. hypervisor vs. container).

The OSCAR project is described in detail in this presentation: {{:oscar:oscar_overview_v8.pptx|}}

In Release 1, OSCAR proposes to support two OPNFV stacks, in order to demonstrate operation with multiple stacks. The stacks are described in the following table:
^ OPNFV Component ^ Stack 1 ^ Stack 2 ^
| Virtual Infrastructure Manager (VIM) | OpenStack (Juno) | OpenStack (Juno) |
| Network Controller | OpenDaylight | OpenContrail |
| Compute OS | Ubuntu, Debian, Fedora | CentOS, Ubuntu |
| Virtualization | KVM/QEMU | KVM/QEMU, Docker |
| Virtual Networking | Open vSwitch | OpenContrail vRouter |
| Preloaded VNFs | As per BGS + TBD | As per BGS + TBD |
| Installation and Orchestration | TBD | TBD |
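To make the stack definitions concrete, the table above could be captured as machine-readable definitions that a deployment tool validates before installation. The following is an illustrative sketch only, not part of the proposal; the dictionary keys and validation rules are assumptions.

```python
# Hypothetical sketch: the stack table expressed as data that a deployment
# tool could validate before installation. All key names are illustrative.

STACKS = {
    "stack1": {
        "vim": "OpenStack (Juno)",
        "network_controller": "OpenDaylight",
        "compute_os": ["Ubuntu", "Debian", "Fedora"],
        "virtualization": ["KVM/QEMU"],
        "virtual_networking": "Open vSwitch",
    },
    "stack2": {
        "vim": "OpenStack (Juno)",
        "network_controller": "OpenContrail",
        "compute_os": ["CentOS", "Ubuntu"],
        "virtualization": ["KVM/QEMU", "Docker"],
        "virtual_networking": "OpenContrail vRouter",
    },
}

REQUIRED_KEYS = {"vim", "network_controller", "compute_os",
                 "virtualization", "virtual_networking"}

def validate_stack(name: str) -> dict:
    """Return the stack definition, raising if a component is missing."""
    stack = STACKS[name]
    missing = REQUIRED_KEYS - stack.keys()
    if missing:
        raise ValueError(f"{name} is missing components: {sorted(missing)}")
    return stack
```

Expressing stacks as data rather than hard-coded scripts is one way to keep "explicitly support other component combinations" cheap: adding a stack becomes adding an entry, not writing a new installer.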
OSCAR will validate each installation once it is completed, and reports will be available to show the configuration and status of each installation under OSCAR’s control.

Later releases of OSCAR will implement the OPNFV platform lifecycle for each installation under management, including component upgrades, scaling and site migration. Images and packages from Collaborative Development projects will be included in OPNFV stacks managed by OSCAR as they become released through Octopus. Additionally, it will be desirable to support configuration of network infrastructure to provide connectivity from physical networks to the virtual infrastructure supporting the VNFs.

OSCAR will be based on a set of open source management tools, selected according to best fit for specific tasks. Available tools include Cobbler, Puppet, Chef, Ansible and several others. The overall workflow for deployment is likely to be based on TOSCA.
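The "best fit per task" idea can be sketched as a registry that maps each deployment task to a tool and the command that drives it. This is a minimal illustration under assumed names; the tool assignments and playbook/profile names are invented, and the sketch only assembles commands rather than executing them.

```python
# Illustrative sketch only: mapping deployment tasks to best-fit open source
# tools and assembling (not running) the corresponding commands.
# Tool choices and file names here are assumptions, not OSCAR decisions.

TOOL_FOR_TASK = {
    "bare_metal_imaging": ("cobbler", ["cobbler", "system", "add"]),
    "node_configuration": ("puppet", ["puppet", "apply", "site.pp"]),
    "service_deployment": ("ansible", ["ansible-playbook", "deploy.yml"]),
}

def plan_deployment(tasks):
    """Return an ordered list of (tool, command) pairs for the given tasks."""
    plan = []
    for task in tasks:
        if task not in TOOL_FOR_TASK:
            raise KeyError(f"no tool registered for task: {task}")
        plan.append(TOOL_FOR_TASK[task])
    return plan
```

A TOSCA-driven workflow would then order these tasks from the topology template rather than from a hand-written list.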
Documentation will be provided to enable users to edit the components of an OPNFV stack or to create new stacks. This will include specification of component images and packages, scaling rules, high availability configurations, and health check metrics and methods.
====== Scope ======
The primary focus of OSCAR is managing pre-production and production deployments in operator labs and networks. The intent is to develop a system which provides “recipes” or templates that can be used to create OPNFV stacks whose configurations match the desired levels of scalability and availability required by individual operators and for each specific deployment.
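One way to picture the recipe/template idea is the same stack rendered at different scale and availability levels. The following sketch is hypothetical: the profile names, node counts, and HA flag are invented for illustration, not taken from the proposal.

```python
# Hypothetical illustration of the "recipe"/template idea: one stack,
# rendered at different scale and availability levels. Profile names and
# node counts are invented for the sketch.

SCALE_PROFILES = {
    "poc":        {"controllers": 1, "computes": 1,  "ha": False},
    "test":       {"controllers": 1, "computes": 3,  "ha": False},
    "production": {"controllers": 3, "computes": 20, "ha": True},
}

def render_recipe(stack_name: str, profile: str) -> dict:
    """Combine a stack name with a scale profile into a deployable recipe."""
    settings = SCALE_PROFILES[profile]
    return {"stack": stack_name, "profile": profile, **settings}
```

Separating the stack definition from the scale profile lets each operator reuse the same component choices while matching their own availability requirements.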
**Related Projects**
  * **Octopus** provides an environment in which OPNFV components (platform builds based upon various combinations of components from upstream projects) can be maintained in sync with the upstream projects, and built from those source components into a cohesive set of images and packages, with associated metadata, that form the input artifacts for test and deployment projects, e.g. OSCAR and the test/integration project (Pharos). Since OSCAR is itself an OPNFV component, it will ultimately have its own lifecycle within Octopus.
  * **Get-started** will provide a capability to quickly deploy a base configuration of OPNFV components in order to support developer testing of OPNFV stacks and to aid Octopus CI development. OSCAR will use the BGS stack as one of its initially supported stacks in order to exercise the ability to deploy at scale and on multiple sites.
  * OSCAR will follow the release cadencing of the **Simultaneous Release** project, once it is in place.
====== Test Cases ======
OSCAR will provide automated tools for installation of the various components, as well as health checks for individual components. Testing will include:

**Release 1**
  * Deploy a specified OPNFV stack onto designated bare metal or virtual servers (including preloading VNF images and Heat templates)
  * Validate the deployment
  * Report on configuration and status (health check) of specified deployments

**Release 2+**
  * Life cycle management for OPNFV stacks (scale, upgrade, migrate)
  * Management of preloaded VNFs
  * Test configuration of the physical network for connectivity to VNFs
  * Edit stacks and create new stacks
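The per-component health checks above ultimately have to roll up into a single status report. A minimal sketch of that aggregation step, assuming a simple component-name-to-result interface that is not specified in the proposal:

```python
# Sketch (assumed interface, not OSCAR code): aggregating per-component
# health-check results into an overall deployment status report.

def health_report(checks: dict) -> dict:
    """checks maps component name -> bool (True = healthy).

    Returns an overall status plus the list of failing components.
    """
    failing = sorted(name for name, ok in checks.items() if not ok)
    return {
        "status": "healthy" if not failing else "degraded",
        "failing_components": failing,
    }
```

In practice each boolean would come from a component-specific probe (e.g. an OpenStack service API call or a vRouter status query), but the roll-up logic stays the same.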
+ | |||
+ | ====== Documentation ====== | ||
+ | |||
+ | Documentation will be provided that describes the OSCAR architecture, components, supported stacks, installation procedure, user interface, APIs, and configuration files. | ||
====== Dependencies: ======
The OSCAR project relies on the following open source projects:
  * OpenStack Juno release: various components including Nova, Neutron, Ceilometer, Heat, etc.
  * OpenDaylight: Open vSwitch and controller
  * OpenContrail: vRouter and controller
  * Installer: Juju, Cobbler
  * Configuration & Management: Juju, Puppet
  * Virtualization: QEMU/KVM
  * Containers: Docker
  * OS: Linux Ubuntu/CentOS/Debian/Fedora distributions
===== Key Project Facts =====

** Project Creation Date: ** \\
** Project Category: ** Integration & Testing\\
** Lifecycle State: ** \\
** Primary Contact: ** \\
** Project Lead: ** \\
** Jira Project Name: ** OPNFV System Configuration And Reporting \\
** Jira Project Prefix: ** OSCAR \\
** Mailing list tag: ** [oscar] \\
====== Committers: ======
  * Peter Lee ([[plee@clearpathnet.com]]), CLEARPATH NETWORKS
  * Konstantin Babenko ([[kbabenko@ngnware.com]]), ngnWare
  * Prakash Ramchandran ([[prakash.ramchandran@huawei.com]]), Huawei
  * Narinder Gupta ([[narinder.gupta@canonical.com]]), Canonical
====== Contributors: ======
  * Parantap Lahiri ([[plahiri@juniper.net]]), JUNIPER NETWORKS
  * Konstantin Babenko ([[kbabenko@ngnware.com]]), ngnWare
  * Nabeel Asim ([[nasim@ngnware.com]]), ngnWare
  * Ivan Zorrati ([[ivan.zorati@canonical.com]]), Canonical
====== Planned Deliverables ======
  * OSCAR configuration server package
  * Installation procedure and scripts for installing OSCAR onto bare metal or virtual servers
  * Configuration files for supported OPNFV stacks
  * Images and packages for supported OPNFV stacks
  * Inventory and system configuration reports for deployed OPNFV stacks
  * Documentation describing how to support the OPNFV stack lifecycle, and how to configure OSCAR to support new solution components and VNFs
====== Proposed Release Schedule: ======
  * The first OSCAR release is targeted to align with the second OPNFV release. Interim downloads may be available ahead of the formal release.
  * The project aligns with the current release cadence.