Keith's presentation on data path framework

* Argy: Is it possible to share an accelerator with VM3?

  • Yes. First of all, VFs enabled by SR-IOV allow you to share a device with the best performance. Secondly, if you ignore the SAL in the VM and focus on the SAL in host user space, that SAL can do orchestration to stop VMs from stepping on each other, but with degraded performance. Without the support of SR-IOV, though, it is going to be difficult: every time you use a lock, e.g. a spin-lock, you degrade your system by 15-20%.
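For context, the VF carve-out discussed above is typically done through the Linux sysfs interface. A minimal sketch, assuming a PF named eth0 (the interface name and VF count are illustrative; this needs SR-IOV-capable hardware and root privileges):

```shell
# Sketch: query and enable SR-IOV VFs via sysfs.
# "eth0" is an assumed PF name; adjust for your NIC.
PF=/sys/class/net/eth0/device
if [ -e "$PF/sriov_totalvfs" ]; then
    echo "device supports $(cat "$PF/sriov_totalvfs") VFs"
    echo 4 > "$PF/sriov_numvfs"   # carve out 4 VFs to pass through to VMs
else
    echo "SR-IOV not available on this PF"
fi
```

Each VF then shows up as its own PCI function that can be passed through to a VM.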

* Argy: You're not proposing that the transport layer be SR-IOV only, right? I assume it could be anything, even shared memory, right?

  • Yes. We are only specifying the interface, and the transport could be anything: shared memory, SR-IOV, etc.

* Saikirshana: Is there an assumption about supporting acceleration metadata on this interface?

  • Right now, we don't have it, because virtio only does networking. Extensions to virtio need to be further defined in alignment with the dpacc mgmt plane.

* Saikirshana: The metadata could be anything. It could be the packet data, or metadata without packet data. (not fully captured)

  • I think I agree with you, but I don't fully follow you yet.

* Srini: How could VM#3 have its packets not go through the vSwitch?

  • Via the SAL in the VM, using an SR-IOV-like transport to talk directly with the device without going through the vSwitch.

* Srini: I am wondering if you can find some use cases where packets don't go to the vSwitch and go directly to the HW?

  • Crypto, etc.

* Srini: I assume your slides are about networking?

  • No, it could be networking, and it could be crypto. But it could be a problem if networking packets bypass the vSwitch and accumulate from different paths. So, let us assume that the packets bypassing the vSwitch are not networking. And utilizing a single VF from a user-space vSwitch does provide better scalability than VMs talking directly with the device.

* Argy: Why do you think it does not scale well?

  • For the fact that, with SR-IOV, a device typically has a limited number of virtual functions. In some cases, a device has problems if you attach too many virtual functions to a single device. You can run into a scale problem with this. I mean, the device is still limited by the network traffic. Say your device only supports 8 VFs, but you would like 16 VMs to access that device. You could use the vSwitch and have the SAL underneath it provide a single access point to that device. Does that make sense?
  • Yes. When you say the limitation is on the device capability, I agree. But the SR-IOV mechanism itself does not place a limit on the number of VFs.
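The scalability argument above (more VMs than the device has VFs) can be sketched as a toy model; nothing here is a real SAL API, just an illustration of the host-side SAL acting as a single access point that multiplexes VMs onto a fixed VF pool:

```python
# Toy model: 16 VMs share a device that only exposes 8 VFs.
# The host-side SAL assigns VMs to VFs round-robin; all names are illustrative.
def assign_vms_to_vfs(num_vms, num_vfs):
    """Map each VM index to a VF index, round-robin."""
    return {vm: vm % num_vfs for vm in range(num_vms)}

mapping = assign_vms_to_vfs(16, 8)
per_vf = {}
for vm, vf in mapping.items():
    per_vf.setdefault(vf, []).append(vm)
# With 16 VMs over 8 VFs, each VF ends up serving exactly two VMs.
print({vf: len(vms) for vf, vms in sorted(per_vf.items())})
```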

Keith: One can always bypass the vSwitch in the host and rely on the local/external device to do the VM-to-VM switching. But since not all local devices support vSwitching offload, that could add extra workload to the external world, e.g. a ToR switch.

* Argy: Is it possible for the SAL in the VM to allow dynamic path changes for different packets? For instance, with IPsec, key mgmt may not need to go through the vSwitch the way bulk data for encryption/decryption does.

  • Yes, the SAL in the VM can do that. If you don't have a SAL in the VM, the SAL in the host can also do that.
  • My concern is that the VM should have the capability to choose its outbound path.
  • Yes, that is doable. But if the device does not support VFs, you may get stuck.

Keith summarizes the proposals as:

  • enhancement to virtio, using it as the general interface between guest and host;
  • the vSwitch sitting on the SAL, as another use case for acceleration, which is optional from the VNF's perspective.

* Argy: Virtio has performance issues.

  • Yes, there are concerns about virtio performance, although 6WIND has stats showing that high performance with virtio is also doable. But I think, as long as you stay out of kernel space, you will be much safer and faster, since you avoid interrupting the kernel's other work while trying to get a large volume of traffic out of the system. On multi-core systems, allocating cores to different functionalities may also help solve the performance issue.

* Lingli: What about the mgmt flows contained in HW's AAL framework, i.e. setting up and locally managing the data paths? Are they covered by your proposal? How would you realize the mgmt flows in your architecture?

  • I agree that we should have the mgmt functionalities that the AAL defines. But rather than creating a new entity, I believe we should extend virtio to include what the AAL proposes. The reason is that I believe it would be easier to upstream and for people to accept. Although virtio is currently only vring-based and transports packet data, it can transport any data and tag the data as metadata from a mgmt entity.

Pending email confirmation from HW to settle the virtio interface and extension proposal. Once agreement is reached, move on to more details regarding the packet flows and mgmt flows.

* Srini: We have actually implemented IPsec acceleration based on virtio, and would like to share it later.

Howard's presentation on openstack work plan

2 modifications were made to the slides before the discussion:

  • add "develop catalogue and inventory of accelerator resources to ensure alignment with NSD/VNFD definition" into openstack work proposal per request from list discussion;

(Note: the actual definition is out of our scope, but the alignment is key.)

  • Add new slides for two fronts work plan, including:
    • OpenStack L Release: Nova enhancements on the scheduler etc. to reflect a simple awareness of the acceleration pool.
    • OpenStack M Release: Propose a top-level project as an individual acceleration management project, which could serve as the direct upstream project for DPACC to fulfill any mgmt-related requirements.

Argy: With the top level project, do you still need enhancement in Nova?

  • The proposed Rocket project will take over the life cycle management of the acceleration resource pool, with fewer enhancements needed in Nova. But in case the Rocket idea is not well accepted by the community, we can still work on Nova and other projects.

Srini: Looks good. Maybe more clarification is needed on the interaction between the different components within the work flows.

Keith: We need to define what information we would need to give to and get from Nova to be able to set up and configure the acceleration in the node itself. The API design would be very much dependent on the information type it needs to transfer.

Fahd: (comments not captured due to poor connection)

Next step suggestions:

  • Combine the proposed Rocket architecture with Srini's work flow contribution.
  • Consider using IPsec in the AAL as an example to explore the information exchanged via the various interfaces.

Francois's presentation on ETSI descriptors for vRouter-based IPsec VPN

Classification of accelerators: VNF accelerator and NFVI accelerator. The NFVI accelerator is actually implemented by the vRouter in the host doing L2-L4 switching instead of the vSwitch, and is configured through NSD/VNFD/VNFFGD.

Using a small cell GW as an example, Francois shows that the IPsec tunneling functionality between the small cells and the SeGW, which is originally handled by a dedicated SeGW VNF, can be described as an IPsec connection point and realized by a vRouter sitting on the SAL (e.g. using the OpenFlow-based work flows proposed by Srini to configure the underlying accelerators).

Srini: Which OpenStack entity/item corresponds to the VNFD? Is that the VNF metadata?

  • Yes. OpenStack Neutron supports VPNaaS.

When doing NS initiation, at step 7 the VIM instantiates the connectivity network needed for the Network Service, where the vRouter could be called to do the proper configuration for the IPsec accelerators (look-aside or full offload).

Keith: vRouter and vSwitch are similar in the general framework.

Francois points out that the Linux stack does not support full IPsec offloading; therefore the vRouter-based solution is simpler than modifying the stack to utilize it. It is being considered at ETSI NFV to have the vRouter configured through YANG or NETCONF models and translated to Neutron. But the descriptors are not fully standardized yet.

Lingli: There is only one piece of information in the current descriptor for an IPsec link. How could the VIM infer from that information the acceleration resources that might need to be allocated to the vRouter to enable such a link?

  • It is an assumption at ETSI NFV that the VIM would understand. Right now, all the descriptors are defined as human-readable information. How the VIM understands and reacts is not considered.

Howard: For implementation, we could implement those descriptors as HEAT templates.
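As a rough illustration of Howard's suggestion, an acceleration requirement could ride along as server metadata in a HEAT template. This is a sketch, not a working template: the flavor, image, and the `dpacc:ipsec` key are all invented for the example.

```yaml
heat_template_version: 2014-10-16
resources:
  segw_vm:
    type: OS::Nova::Server
    properties:
      flavor: m1.medium           # assumed flavor name
      image: vrouter-image        # assumed image name
      metadata:
        "dpacc:ipsec": look-aside # invented key: requested acceleration mode
```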

Lingli: Are the descriptors as defined in ETSI NFV phase 1 enough for VNF accelerators?

  • Yes for the SeGW. However, it is not enough for dpacc, because dpacc involves more detailed information that ETSI has not yet worked on. But as Howard said, we can make an assumption and start with something simple.

Discussion on virtio extension for data path framework

Ferran: Question for Keith. It is proposed to add a crypto API to virtio, right?

Keith: Yes, I want to extend virtio.

Ferran: I agree that virtio needs to be extended. But adding each accelerator type's own API would make it too complex.

Keith: I don't see that. Currently virtio only does networking. Crypto or compression will have different sets of APIs, in addition to the current ones.

Ferran: I suggest another higher layer can be added on top of virtio to better address the issue.

Keith: Yes, we can. But extending virtio with more functionality could be an easier option to push out. If we create a new piece, we still have to integrate it with virtio, or we will have to create a new path out of the VM to the host, similar to what virtio is already doing. So why not extend virtio to do that as well? Maybe virtio already has a higher-layer entity that can be extended. We need people to look into it further.

dpacc/meeting_minutes_0410.txt · Last modified: 2015/04/10 07:15 by Lingli Deng