Data Plane Acceleration
dpacc
(Requirements)
As a result of traffic convergence and the pervasive real-time, high-performance requirements placed on traditional data plane devices, combined with the inability of general-purpose CPUs to carry out computation-intensive tasks cost-efficiently, various hardware acceleration solutions optimized for specific tasks are widely applied in traditional data plane devices, and this is expected to remain a common practice in virtualized data plane devices (i.e. as VNFs).
The ultimate goal of this project is to specify a common suite of data plane acceleration (or DPA for short) related APIs at various OPNFV interfaces, to enable VNF portability across various underlying hardware accelerators or platforms.
By using these common APIs, data plane VNFs are expected to be easily migrated across the available platforms and/or hardware accelerators at the ISP's demand, while ISPs can also change the platform or introduce new hardware accelerators without modifying the VNFs.
As shown in the following figure, there are basically two alternatives for enabling the use of a common set of APIs by a data plane VNF. The functional abstraction layer framework and architecture should support both and provide a unified interface to the upper-layer VNF. The person configuring the VNF, hypervisor/host OS and hardware (policies) decides which mode to use.
Note: for simplicity, the figures are drawn for hardware offloading accelerators (or HWAs) on the local hardware platform, but the scope of this project is by no means limited to local hardware acceleration.
The “pass-through” model, in which the VNF uses a common suite of acceleration APIs to discover the available hardware accelerators and then uses the matching, accelerator-specific “direct drivers” to access the allocated hardware resources directly. The features of this model include:
The “fully intermediated” model, in which the VNF talks to a group of abstracted, functional “synthetic drivers”. These “synthetic drivers” relay the calls to a back-end driver in the hypervisor, which in turn interacts with the specific HWA driver. The features of this model include:
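As a very rough illustration of the unified interface underlying both models above, the following C sketch shows what discovery and attachment through such a common DPA API might look like from the VNF's point of view. All dpa_* names, types and stub bodies are purely hypothetical (no such API exists yet); the point is that the same calls would work whether the binding underneath is a pass-through direct driver or an intermediated synthetic driver.

<code c>
/* Hypothetical sketch of a common DPA API -- all dpa_* names are
 * illustrative only, not an existing API.  The same calls are meant
 * to work whether the binding underneath is a pass-through "direct
 * driver" or an intermediated "synthetic driver"; that choice is made
 * by host/hypervisor configuration, not by the VNF code.               */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef enum {                     /* capability classes a HWA may expose */
    DPA_CAP_PKT_FWD = 1 << 0,      /* fast packet forwarding              */
    DPA_CAP_IPSEC   = 1 << 1,      /* IPSec (crypto) offload              */
} dpa_cap_t;

typedef struct {
    int      id;                   /* accelerator handle                  */
    uint32_t caps;                 /* bitmask of dpa_cap_t                */
} dpa_accel_t;

/* Discovery: fill 'out' with up to 'max' accelerators visible to this
 * VNF; returns the number found.  Stubbed here with one fake device.    */
static int dpa_discover(dpa_accel_t *out, size_t max)
{
    if (max == 0) return 0;
    out[0].id   = 0;
    out[0].caps = DPA_CAP_PKT_FWD | DPA_CAP_IPSEC;
    return 1;
}

/* Attach to an accelerator for a given capability.  In the pass-through
 * model this would map device resources via a direct driver; in the
 * intermediated model it would open a synthetic device backed by the
 * hypervisor.  The VNF code is identical either way.                    */
static int dpa_attach(const dpa_accel_t *acc, dpa_cap_t cap)
{
    return (acc->caps & cap) ? 0 : -1;
}

int main(void)
{
    dpa_accel_t accels[8];
    int n = dpa_discover(accels, 8);

    for (int i = 0; i < n; i++) {
        if (dpa_attach(&accels[i], DPA_CAP_PKT_FWD) == 0)
            printf("accel %d attached for packet forwarding\n", accels[i].id);
    }
    return 0;
}
</code>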
As stated earlier, although hardware-assisted data plane acceleration is expected to be common practice in production data plane VNFs, no common APIs exist for VNFs to access these specialized hardware accelerators.
As a result, VNF developers have to rewrite their code when migrating to different hardware, which makes them reluctant to adopt newly available acceleration technologies, while ISPs suffer from an undesirable binding between the VNF software and the underlying platform and/or hardware accelerator in use.
By specifying a common suite of hardware-independent APIs for data plane VNFs, the software implementation can be fully decoupled from the hardware architecture, fulfilling OPNFV's vision of an open, layered architecture. To this end, the proposed project intends to (tentatively scheduled):
Consider the small cell GW VNF as an example, where the VNF is composed of a signaling GW (SmGW) VM and a security GW (SeGW) VM. In this example, the SmGW VM uses hardware acceleration for high-performance packet forwarding, while the SeGW VM additionally uses IPSec offloading; a sketch of what the SeGW side might look like follows below. The following figure highlights the potential extensions to OPNFV interfaces that might be needed to enable hardware-independent data plane VNFs.
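To make the SeGW case concrete, the sketch below shows how such a VM might request IPSec offloading through the same hypothetical common API and fall back to a software path when no suitable HWA is available. All dpa_* and segw_* names are assumptions for illustration only.

<code c>
/* Hypothetical usage by the SeGW VM: request IPSec offload through the
 * common API sketched earlier; if no accelerator exposes it, fall back
 * to software crypto.  All dpa_*/segw_* names are illustrative only.   */
#include <stdint.h>
#include <stdio.h>

typedef struct { int accel_id; int sa_handle; } segw_ipsec_ctx_t;

/* Pretend capability query: does accelerator 'id' support IPSec offload?
 * A real implementation would reuse the discovery call shown earlier.   */
static int dpa_has_ipsec(int id) { return id == 0; }

/* Install a (dummy) security association on the accelerator, returning a
 * handle, or -1 so the caller can fall back to software crypto.          */
static int dpa_ipsec_sa_create(int id, uint32_t spi)
{
    return dpa_has_ipsec(id) ? (int)(spi & 0xffff) : -1;
}

int main(void)
{
    segw_ipsec_ctx_t ctx = { .accel_id = 0 };

    ctx.sa_handle = dpa_ipsec_sa_create(ctx.accel_id, 0x1001);
    if (ctx.sa_handle >= 0)
        printf("SA offloaded to accelerator %d (handle %d)\n",
               ctx.accel_id, ctx.sa_handle);
    else
        printf("no IPSec HWA available, using software crypto\n");
    return 0;
}
</code>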
Names and affiliations of the committers:
Q2 2015 (tentatively)
Not included in the first release.