nfv_hypervisors-kvm

This shows the differences between two revisions of the page.
nfv_hypervisors-kvm [2015/06/29 00:03]
Jun Nakajima [Scope]
nfv_hypervisors-kvm [2015/06/30 18:03] (current)
Prakash Ramchandran [Committers and Contributors]
  * Proposed name for the repository: ''kvmfornfv''
  * Project Categories:
    * Collaborative Development

Data plane VNFs would typically need to use option 1, as noted above. The IVSHMEM library in DPDK, for example, uses shared memory (called “ivshmem”) across VMs (see http://dpdk.org/doc/guides/prog_guide/ivshmem_lib.html). This is one of the most efficient implementations available in KVM, but the ivshmem feature is not necessarily well received or maintained by the KVM/QEMU community. In addition, “security implications need to be carefully evaluated”, as pointed out there.
  
For option 2, it is possible for the VMs, the vSwitch, or the KVM hypervisor to lower overhead and latency using software techniques (e.g. shared memory) or hardware virtualization features. Some of the techniques developed for this purpose are useful for option 1 as well. For example, the virtio Poll Mode Driver (PMD) (http://dpdk.org/doc/guides/nics/virtio.html) and the vhost library (such as vhost-user) in DPDK can help provide fast inter-VM and VM-to-host communication. In addition, hardware virtualization features such as VMFUNC could help protect inter-VM communication by mitigating the security issues with ivshmem (see the KVM Forum 2014 PDF below for details).
  
For this feature, therefore, we need to take the following steps:
http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/page-modification-logging-vmm-white-paper.pdf, for example).
  
In general, live migration is not guaranteed to complete; success depends on the workload of the VM and on the bandwidth of the network used to transfer the ongoing VM-state changes to the destination. Upon such a failure, the VM simply stays at the source node. One effective way to increase the probability of success is to choose a time period when the workload is known to be low. This can be done at the orchestration or management level in an automated fashion, and it is outside the scope of this project.
  
We also see more complex types of live migration beyond the above area (i.e. live migration of a single, independent VM). For example, VM memory can be accessed directly by a vSwitch for packet transfer. In this case, the vSwitch needs to be notified of the live migration, and the vSwitch on the destination machine needs to pick up the VM. We will discuss whether this kind of live migration is required at the requirements-gathering stage, and then decide how to support it if needed.
  
The other significant limitation of current live migration is the lack of support for SR-IOV. This is mainly due to missing hardware features in the IOMMU and in the devices that would be required to achieve live migration. Once we have line of sight to software workarounds, we will add SR-IOV support to this subproject.
  * Manish Jaggi: Manish.Jaggi@caviumnetworks.com
  * Kin-Yip Liu: kin-yip.liu@caviumnetworks.com
  * Prakash Ramchandran: prakash.ramchandran@huawei.com
  
  
nfv_hypervisors-kvm.1435536209.txt.gz · Last modified: 2015/06/29 00:03 by Jun Nakajima