requirements_projects:data_plane_acceleration

requirements_projects:data_plane_acceleration [2015/01/07 07:17]
Lingli Deng [Committers and Contributors:]
requirements_projects:data_plane_acceleration [2015/09/11 17:07] (current)
Ashlee Young [Committers and Contributors:]
  * Proposed name for the project: ''Data Plane Acceleration''
  * Proposed name for the repository: ''dpacc''
  * Project Categories: ''Requirements''
  
==== Project description: ====
As a result of the convergence of various traffic types and increasing data rates, the performance requirements (both in bandwidth and in real-time behavior) on data plane devices within network infrastructure have been growing at significantly higher rates than in the past. As traditional ‘bump-in-the-wire’ network functions evolve to a virtualized paradigm with NFV, the pressure to deliver high performance within very competitive cost envelopes will be even greater. At the same time, application developers have, in some cases, taken advantage of various hardware and software acceleration capabilities, many of which are platform supplier dependent. There is a clear impetus to move away from proprietary data plane interfaces, in favor of more standardized interfaces for leveraging the data plane capability of underlying platforms, whether using specialized hardware accelerators or general purpose CPUs.
  
The goal of this project is to specify a general framework for VNF data plane acceleration (DPA for short), including a common suite of abstract APIs at various OPNFV interfaces, to enable VNF portability and resource management across various underlying platforms: integrated SoCs that may include hardware accelerators, or standard high volume (SHV) server platforms that may include attached hardware accelerators. It may be desirable, as a design choice in some cases, that the DPA API framework fit underneath existing prevalent APIs (e.g. sockets), mainly for legacy implementations, even though these may not be the most performance-efficient paths. However, this project should not seek to dictate which APIs an application must use; rather, it recognizes that API abstraction is likely a layered approach, and developers can decide which layer to access directly, depending on the design choices for a given application.

This project proposes to define the DPA API framework by considering a set of use cases that are most common and important for data plane devices, namely:

  * Usecase1: Packet processing
  * Usecase2: Encryption
  * Usecase3: Transcoding
  
By utilizing such a cross-use-case, cross-platform and cross-accelerator general framework, it is expected that data plane VNFs can easily be migrated across the available SHV server platforms and/or hardware accelerators on a communication service provider's (CSP's) demand, while CSPs can also change the platform or apply new hardware/software accelerators with minimal impact to the VNFs.
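To make the intended portability concrete, the following minimal C sketch shows one shape such a common DPA API could take. All names here (''dpa_ops'', ''dpa_discover'', the toy XOR "cipher") are invented for illustration and are not part of any existing specification; a real framework would probe for hardware accelerators and expose real cipher suites.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical operation table: every backend, hardware or software,
 * exposes the same entry points, so the VNF code never changes. */
struct dpa_ops {
    const char *name;
    int (*encrypt)(const uint8_t *in, uint8_t *out, size_t len);
};

/* Software fallback on a SHV server: a toy, invertible XOR transform
 * stands in for a real cipher. */
static int sw_encrypt(const uint8_t *in, uint8_t *out, size_t len)
{
    for (size_t i = 0; i < len; i++)
        out[i] = in[i] ^ 0xA5;
    return 0;
}

static const struct dpa_ops sw_backend = { "sw-fallback", sw_encrypt };

/* Discovery: a real implementation would enumerate HWAs first and only
 * fall back to the software backend when none is present. */
const struct dpa_ops *dpa_discover(void)
{
    return &sw_backend;
}
```

A VNF written against ''dpa_discover()'' runs unchanged whether the returned backend drives an accelerator or a general purpose CPU, which is precisely the migration property described above.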
  
-As shown by the following figure, there are basically two alternatives ​to enable ​the use of a common ​APIs by data plane VNF. The functional abstraction layer framework and architecture should support both, and provide a unified interface to the upper VNF. It will be the person configuring the VNF, hypervisor/​host OS and hardware (policies) that decides which mode to use.+As shown in the following figure, there are basically two alternatives ​for realizing ​the data plane APIs for a VNF. The functional abstraction layer framework and architecture should support both, and provide a unified interface to the upper VNF. It will be the person configuring the VNF, hypervisor/​host OS and hardware (policies) that decides which model to use.
  
{{ :requirements_projects:dpacc-figure-1.png?direct |}}
  
Note: As one can see, the scope of the project includes hardware offloading accelerators (HWAs) on the local hardware platform as well as general purpose SHV platforms.
  
In the “pass-through” model, the VNF uses a common suite of DPA APIs to discover the available hardware accelerators and/or generalized network interfaces (NWIs), and then uses the correct, device-specific “direct drivers” to access the allocated hardware resources directly. The features of this model include:
  - It enables the most efficient use of hardware resources by bypassing the hypervisor/host OS, yielding higher performance than the other model.
  - It cannot provide “absolute transparency” to VNFs using hardware accelerators: their VM images must be updated each time they adopt a new type of hardware accelerator, in order to load the specific driver and make it known to the application.
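A minimal sketch of the pass-through binding step, assuming an invented device-ID scheme and driver table (nothing here is a real driver API): the VM image ships a fixed set of direct drivers, and binding fails for any device the image does not yet know, which is exactly the transparency limitation noted in point 2.

```c
#include <stddef.h>
#include <string.h>

/* A "direct driver" the VNF image ships with; probe() would map the
 * device's queues/registers for direct, hypervisor-bypassing access. */
struct direct_driver {
    const char *device_id;    /* invented ID scheme, e.g. "vendor:model" */
    int (*probe)(void);
};

static int probe_acme_crypto(void) { return 0; }  /* stand-in probe */

static const struct direct_driver shipped_drivers[] = {
    { "acme:crypto-v1", probe_acme_crypto },
};

/* Bind a discovered device to a shipped direct driver.  Returns 0 on
 * success; -1 means the VM image lacks a driver and must be upgraded. */
int bind_direct(const char *discovered_id)
{
    for (size_t i = 0; i < sizeof shipped_drivers / sizeof shipped_drivers[0]; i++)
        if (strcmp(shipped_drivers[i].device_id, discovered_id) == 0)
            return shipped_drivers[i].probe();
    return -1;
}
```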
  
Alternatively, there is the “fully intermediated” model, where the VNF talks to a group of abstracted functional “synthetic drivers”. These “synthetic drivers” relay each call to a backend driver in the hypervisor, which in turn interacts with the specific driver for the underlying HWA and/or NWI. The features of this model include:
  - Through this intermediate layer in the hypervisor, a registration mechanism is possible: a new HWA can be mapped to the backend driver and then used automatically, with no changes to the upper VNFs.
  - Access control and/or resource scheduling mechanisms for HWA allocation to different VNFs can also be included in the hypervisor, enabling flexible policies for operational considerations.
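The registration idea can be sketched as follows; all names are invented, and the direct function call stands in for what would really be a virtio-style request over shared memory. Swapping the registered backend function models plugging in a new HWA without touching the guest.

```c
#include <stddef.h>

typedef int (*hwa_fn)(int pkt);

/* ---- hypervisor side ----------------------------------------------- */
static hwa_fn registered_hwa = NULL;         /* currently mapped HWA */
static int sw_path(int pkt) { return pkt; }  /* software fallback */

/* Registration: mapping a new HWA here requires no guest change. */
void hypervisor_register_hwa(hwa_fn fn) { registered_hwa = fn; }

static int backend_process(int pkt)
{
    return registered_hwa ? registered_hwa(pkt) : sw_path(pkt);
}

/* Example HWA path: "marks" the packet so the taken route is visible. */
int acme_hwa(int pkt) { return pkt + 1000; }

/* ---- guest side: the synthetic driver never changes ----------------- */
int synthetic_process(int pkt)
{
    /* Really a request on a shared ring, not a direct call. */
    return backend_process(pkt);
}
```

Registering ''acme_hwa'' in the hypervisor changes which path serves the packet, while the guest keeps calling the same ''synthetic_process()''.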
  
   * Problem Statement ​   * Problem Statement ​
  
As stated earlier, application developers today either use hardware and software acceleration mechanisms that are supplier/platform specific, or would like to leverage such acceleration but are reluctant to face the resulting migration issues when porting software to other platforms. Interfaces to the underlying hardware vary widely, so there is a need to establish a consistent, high-performance data plane API that facilitates the development of production data plane VNFs that make the best use of the underlying hardware resources while remaining portable across platforms.
  
To this end, the proposed project is intended to:
  
Phase 1: (by 2015Q2)
  
      - document typical VNF use-cases and high-level requirements for a generic functional abstraction for high-performance data plane and acceleration functions, covering both hardware and software acceleration; and
      - identify the potential extensions across various NFV interfaces, and evaluate the current state-of-the-art solutions from open-source upstream projects against the identified requirements and targeted framework.
  
Phase 2: (by 2015Q4)

      - specify the detailed framework/API design/choice and document test cases for the selected use-cases;
      - provide an open source implementation of both the framework and test tools;
      - coordinate integrated testing and release testing results; and
      - produce the interface specification.
  
  * Interface specification
  
Consider the small cell GW VNF as an example, where the VNF is composed of a signaling GW (SmGW) VM and a security GW (SeGW) VM. In this example, the SmGW VM uses hardware acceleration technology for high-performance packet processing, while the SeGW VM additionally uses IPSec offloading. The following figure highlights the potential extensions to OPNFV interfaces that might be needed to enable hardware-independent data plane VNFs.

{{ :requirements_projects:dpacc-figure-2.png?direct |}}
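One way the hardware independence could surface in the small cell GW example, sketched with invented names: the SeGW asks for the capability "ipsec-esp" by name rather than for a vendor device, so the same VM image runs whether the platform offloads ESP in hardware or performs it in software.

```c
#include <stddef.h>
#include <string.h>

struct capability {
    const char *name;
    int offloaded;            /* 1 = hardware offload, 0 = software path */
};

/* What a (hypothetical) platform advertises through the DPA API. */
static const struct capability platform_caps[] = {
    { "pkt-fwd",   1 },       /* SmGW's packet processing is offloaded */
    { "ipsec-esp", 0 },       /* this platform does ESP in software    */
};

/* Returns 1 if offloaded, 0 if served in software, -1 if unavailable.
 * In the first two cases the VNF's own logic is identical. */
int dpa_query(const char *cap)
{
    for (size_t i = 0; i < sizeof platform_caps / sizeof platform_caps[0]; i++)
        if (strcmp(platform_caps[i].name, cap) == 0)
            return platform_caps[i].offloaded;
    return -1;
}
```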
  
  * What is in or out of scope
  
      - Data plane acceleration use-cases other than packet-processing, encryption or transcoding are currently out of scope; they could be included in later phases.
      - Management plane APIs are currently out of scope, to keep the initial scope manageable; they could be included in later phases.
  
  * The project can be extended in the following aspects in the future:
  
      - To include more use-cases;
      - To include management plane interfaces;
      - To coordinate integrated testing and release testing results.
  
==== Dependencies: ====
  
  * There is no similar proposal underway or being proposed in OPNFV or any upstream project.

  * Related upstream projects:

      - OpenDataPlane (ODP) provides a programming abstraction for networking System on Chip (SoC) devices, including abstractions for packet processing, timers, buffers, events, queues and other hardware and software constructs. ODP is an API abstraction layer that hides the differences between hardware and software implementations (both are supported underneath, and each may be unique to a SoC as its author sees fit) and provides unified APIs for specific functions. ODP is ISA agnostic and has been ported to the ARM, MIPS, PowerPC and x86 architectures.
      - DPDK is a similar project, which provides a set of libraries and drivers for fast packet processing on the x86 architecture and has been ported to OpenPOWER, ARM and even MIPS.
      - OpenCL is a framework for writing programs that execute across heterogeneous platforms consisting of CPUs, GPUs, DSPs, FPGAs and other processors. OpenCL enables a rich range of algorithms and programming patterns to be easily accelerated.
      - libvirt is a toolkit for interacting with the virtualization capabilities of recent versions of Linux (and other OSes). Its goal is to provide a common and stable layer sufficient to securely manage domains on a single physical machine, where a domain is an instance of an operating system (or a subsystem, in the case of container virtualization) running on a virtual machine provided by the hypervisor.
      - Virtio is a de facto hardware-transparent interface for exposing storage and network devices to virtual machines, and is being standardized in the OASIS ‘virtio’ group. The virtio framework is one of the candidates for creating vendor-independent drivers for look-aside accelerators.
      - OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter. It is considered a candidate implementation of the Virtualized Infrastructure Manager (VIM) for the OPNFV platform.
  
==== Committers and Contributors: ====
  * Lingli Deng (China Mobile, denglingli@chinamobile.com)
  * Bob Monkman (ARM, Bob.Monkman@arm.com)
  * Kin-Yip Liu (Cavium, Kin-Yip.Liu@caviumnetworks.com)
  * Xinyu Hu (Huawei, huxinyu@huawei.com)
  * Vincent Jardin (6WIND, vincent.jardin@6wind.com)
  * Wenjing Chu (DELL, Wenjing_Chu@DELL.com)
  * Saikrishna M Kotha (Xilinx, saikrishna.kotha@xilinx.com)
  * Bin Hu (AT&T, bh526r@att.com)
  * Subhashini Venkataraman (Freescale, subhaav@freescale.com)
  * Leon Wang (Altera, ALEWANG@altera.com)
  * Keith Wiles (Intel, Keith.wiles@intel.com)
  * Xiaowei Ji (ZTE, ji.xiaowei@zte.com.cn)
  
Names and affiliations of the contributors:

  * Peter Willis (British Telecom, peter.j.willis@bt.com)
  * Fahd Abidi (EZCHIP, fabidi@ezchip.com)
  * Deepak Unnikrishnan (deepak.cu@gmail.com)
  * Julien Zhang (ZTE, zhang.jun3g@zte.com.cn)
  * Srini Addepalli (Freescale, saddepalli@freescale.com)
  * François-Frédéric Ozog (6WIND, ff.ozog@6wind.com)
  * Tapio Tallgren (Nokia, tapio.tallgren@nsn.com)
  * Mikko Ruotsalainen (Nokia, mikko.ruotsalainen@nsn.com)
  * Zhipeng Huang (Huawei, huangzhipeng@huawei.com)
  * Argy Krikelis (Altera, AKRIKELI@altera.com)
  * Venky Venkatesan (Intel, Venky.Venkatesan@intel.com)
  * Alex Mui (ASTRI, alexmui@astri.org)
  * Jesse Ai (ASTRI, jesseai@astri.org)
  * Arashmid Akhavain (Huawei, arashmid.akhavain@huawei.com)
  * Parviz Yegani (Juniper, pyegani@juniper.net)
  * Mario Cho (hephaex@gmail.com)
  * Hongyue Sun (ZTE, sun.hongyue@zte.com.cn)
  * Haishu Zheng (ZTE, zheng.haishu@zte.com.cn)
  
  
==== Planned deliverables ====

  * Phase 1: (by 2015Q2) Use-cases, requirements and gap analysis
  * Phase 2: (by 2015Q4) General framework specification, running code and testing report

==== Proposed Release Schedule: ====
  
  * The first release is tentatively scheduled for 2015Q2.
  * It is not planned to be included in the first release of OPNFV.