Project “OSCAR” provides a solution to automatically install and configure the required components for OPNFV platform deployments, using existing installer and configuration tools, and to perform a set of basic system-level tests. OSCAR is unique in the scope of OPNFV Release 1 projects in that it focuses on the tools and processes that enable service providers to install OPNFV-based NFVI platforms and validate the installations.
OSCAR will support the installation, configuration, and testing of OPNFV reference implementations based upon a variety of components, depending upon the support OPNFV members provide for particular component combinations within the project. These variations could, for example, leverage different SDN controllers or different approaches to compute infrastructure (e.g. hypervisor vs. container). In Release 1, at least one particular combination is proposed and supported by the project contributors so far, as shown in the table below. However, this project will explicitly support other component combinations in Release 1 and beyond, given contributor support.
The OSCAR project will deliver the ability to install, configure, and test OPNFV reference implementations such as the one proposed in the “Technology Proposed - Release 1” column in the table below. This proposal targets an installation in a virtual environment based on Ubuntu/Trusty as the base operating system and distribution. It includes a set of components that have already been tested, and the overall functionality of the aggregate software has been validated. As such, this is one possible OPNFV reference implementation, which can serve as a framework to fast-track the continuous integration of various components for the first release (targeted for March 2015).
OSCAR’s installation test capability will be used to validate basic installation and operation of a few example VNFs (listed below). Through these VNFs, OSCAR will enable validation of the integrity and functionality of OPNFV reference implementation installations.
OSCAR is intended to be closely aligned with the OPNFV TSC vision for the first release. By focusing on the installation, configuration, and validation of OPNFV reference implementations, OSCAR will help fast track the integration of various combinations of core open source network components, help OPNFV learn from their differences and commonality of deployment experience, and feed that back into producing a more flexible implementation framework.
For the reference implementation proposed in the “Technology Proposed - Release 1” column in the table below, the scope of the NFVI platform functions supported by the OSCAR project is illustrated in the attached figure (see slide 5) oscar_overview_v6.pptx . As the table indicates, all of these features and functions are targeted for the first OPNFV release. Nearly all of them have been tested and integrated within OpenContrail, an open-source platform under the Apache 2.0 license. Various POCs and pre-production versions of OpenContrail have been demonstrated in numerous lab and production systems.
Other reference implementation alternatives will also be supported under OSCAR, as demonstrating the ability to install/configure/test various OPNFV reference platforms is a key goal of OSCAR.
Related Projects
| Technology Area Requirements | Sub-Area Definition | Technology Proposed - Release 1 | Release 2 Plans |
|---|---|---|---|
| Physical Infrastructure Design | Server-Network Connectivity | Reference simple Cluster Design with connectivity specification | Support multi-rack Cluster Design |
| | Network Gateway | Network Gateway interface to L3VPN, L2VPN, EVPN, and Internet | |
| Physical Infrastructure Configuration/Imaging | Server Imaging/Configuration | Server Manager: Cobbler-based imaging; Puppet-based configuration | |
| | Network Device Imaging/Configuration | Netconf-based device configuration management | ODL-based physical device configuration management |
| Virtual Infrastructure Orchestration | Compute Orchestration | OpenStack (Juno) | Support Docker for OpenStack; evaluate other alternative orchestration mechanisms |
| | Storage Orchestration | Ceph-based distributed block storage | Support one more block storage of community choice |
| | Network Orchestration | OpenContrail SDN Controller | |
| | Server OS | Ubuntu (Trusty) | |
| | Server Hypervisor | KVM/QEMU | |
| | Virtual Network Device on Server | OpenContrail vRouter (DPDK) | Support OVS forwarding in kernel |
| | Support for Physical Appliance/Bare-metal Server | VTEP termination on ToR switch using OVSDB | Standards-based EVPN+VXLAN support on ToR |
| Virtual Infrastructure Availability | Orchestration Controller Availability | High Availability with Active-N (N=3) mechanism | |
| | SDN Controller Availability | High Availability with Active-N (N=3) mechanism | |
| Service Orchestration | VNF Initiation | Heat-template-based Virtual Network and Service Chain creation; support for TOSCA-based workflow | |
| | VNF Configuration | Individual vendor EMS-based VNF configuration; YANG-model-based VNF configuration from ODL | |
| | Group Based Policy Support | Instrument one use case to support Group Based Policy | |
| Service Scaling | Horizontal Scaling of Service | API-based horizontal scaling of services | |
| | Vertical Scaling of Service | On-demand resource augmentation of VNF | |
| Traffic Steering | Traffic steering through transparent services | API-based creation of a transparent (bump-in-the-wire) Service Chain between two networks | |
| | Traffic steering through services with L3 processing | API-based creation of an L3-processed Service Chain between two networks | |
| | Traffic steering through multiple virtual services | API-based creation of multiple virtualized services between two networks | |
| | Traffic steering through virtual and physical services | API- and Netconf-based traffic steering through virtualized and physical appliances | |
| User Interface | Creation of Service Chains | GUI- or API-based orchestration of Service Chains | |
| | Operation and Management of Cluster | GUI- or API-based operation and management of the cluster | |
| | CLI Interface | Standard CLI-based operations | |
| Operational Support | Physical Server Infrastructure Monitoring | CPU, memory, NIC, environment, and syslog event monitoring | |
| | Physical Network Infrastructure Monitoring | Device environment, interface utilization, and error-rate monitoring | |
| | Virtual Infrastructure Monitoring | vCPU, vMem, vNIC, and virtual network traffic monitoring | |
| | Traffic Flow Diagnostics | Endpoint reachability testing; flow traceroute diagnostics | |
| | Service Provisioning Interface | GUI- or API-based orchestration of Service Chains | |
| | Diagnostics | Endpoint reachability testing | |
| Data Collection & Analytics | Log Collection | Service logs and syslogs | |
| | Flow Record | 1:1 flow record collection | |
| | Packet Capture | API-driven on-demand full packet capture of any flow | |
| | Flow Path | Correlate overlay and underlay data to trace flow path |
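To make the “Heat-template-based Virtual Network and Service Chain creation” row above concrete, a minimal Heat (HOT) template that creates one virtual network with a subnet might look like the sketch below. This is an illustrative fragment, not an OSCAR deliverable; the resource names and CIDR are hypothetical, and a real service-chain template would add ports, VMs, and chaining resources on top of this.

```yaml
# Minimal sketch of a Heat Orchestration Template (HOT).
# Names ("vnf_net", "vnf-net") and the CIDR are illustrative assumptions.
heat_template_version: 2013-05-23

description: >
  Create one virtual network and subnet, the building block
  on which a service chain would be orchestrated.

resources:
  vnf_net:
    type: OS::Neutron::Net
    properties:
      name: vnf-net

  vnf_subnet:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: vnf_net }
      cidr: 10.0.0.0/24
```

Such a template would typically be launched with the standard OpenStack client (e.g. `heat stack-create` in the Juno timeframe), after which the created network is available for attaching VNF instances.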
OSCAR provides automated testing tools for the installation of the various components, as well as health checks for individual components. This includes (but is not limited to):
- Applicability for vCPE and mobility/wireline subscriber networks
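As a sketch of the kind of health check OSCAR's testing tools could perform (the table above lists "Endpoint reachability testing"), the following minimal Python example probes TCP reachability of component API endpoints. The host name and port map are hypothetical placeholders, not part of the OSCAR specification; a real deployment would substitute its own controller addresses.

```python
import socket

def check_endpoint(host, port, timeout=3.0):
    """Return True if a TCP connection to (host, port) succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections and timeouts
        return False

# Hypothetical endpoint map; hosts and ports depend on the actual deployment.
SERVICES = {
    "keystone": ("controller", 5000),  # identity API
    "nova":     ("controller", 8774),  # compute API
    "neutron":  ("controller", 9696),  # networking API
}

def health_check(services=SERVICES):
    """Return a {service_name: reachable} map for each configured endpoint."""
    return {name: check_endpoint(host, port)
            for name, (host, port) in services.items()}
```

A per-component check like this is deliberately shallow; it verifies that a service is listening, while deeper functional validation (API calls, traffic tests) would follow in the system-level test cases.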
Information regarding testing and integration, including interoperability, scalability, and high availability, is provided in the table above. Additional information on quality assurance and test resources will be made available as needed. The test cases include:
All API-related documents will be made available in a timely manner. A detailed description of the functional architecture (building blocks, reference points, interfaces and protocols, workflow diagrams, etc.) will be provided during the development, integration, and testing processes.
The OSCAR project relies on the following open source projects: