==== Project Name: ====
  
  * Proposed name for the project: Template Distribution Service (Domino)
  * Proposed name for the repository: Domino
==== Project description: ====
Line 30: Line 30:
The OPNFV architecture relies heavily on northbound APIs (NBIs) for resource management and orchestration. Keeping NBIs simple and generic increases usability and helps prevent backward-compatibility issues. However, a simple API call to a particular VIM, such as an SDN controller or an OpenStack controller, can be interpreted in various ways if multiple options are available.
  
{{:project_proposals:vnf_scaling.png?300|}}

Figure 3: VNF Scaling Options

In Figure 3, an example of VNF scaling is provided. The VNFM can simply make a call such as VNF.scale(scaling.factor), where the input parameter //scaling.factor// determines the factor of capacity increase. If, for instance, //scaling.factor// is specified as two, the VIM is supposed to double the capacity after receiving the API call. The two options shown in the figure are both meaningful choices in the absence of additional clues about the service. In the first option, the VIM changes the configuration of the VM by doubling its CPU, memory, and disk resources (i.e., it scales up the VM instance). This may be done by first pausing the VM, taking a snapshot, and then rebooting the snapshot image with a larger hardware configuration. This clearly interrupts the existing workloads and sessions on the scaled VNF. If the VIM had the capability of doubling the resources without any service interruption (e.g., if it could add virtual hardware resources without requiring an image snapshot and reboot), scaling up a given VNF would have a different consequence for the actual services that utilize this particular VNF. Thus, even though the VNF is simply moved to a larger instance size, the service implications differ considerably depending on VIM capabilities. In the second option, VNF capacity is doubled by simply launching another VM of the same size (i.e., the VNF is scaled out). For stateless network functions, this is a relatively straightforward way of doubling the service capacity, as long as the VMs do not share a bottleneck. If the VNF has a stateful implementation, state synchronization between the VMs to handle the existing sessions might be a major issue. The main point of this discussion is that, without knowing the service context and the behavior/performance model of these VNFs when going from one configuration to the other, an arbitrary decision by the VIM can lead to poor performance. This undesired outcome could have been avoided entirely if enough conditions and rules (i.e., prescriptions) had been passed to the VIM by the VNFM.
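
To make the ambiguity concrete, here is a minimal Python sketch (all class and method names are hypothetical, not part of any OPNFV or OpenStack API) showing how two VIMs could legitimately interpret the same scale(2) call in opposite ways:

<code python>
# Hypothetical illustration: two VIMs interpreting the same NBI call
# vim.scale(vnf, scaling_factor) in different, equally plausible ways.

class ScaleUpVIM:
    """Doubles per-VM resources (CPU/RAM/disk); interrupts live sessions."""
    def scale(self, vnf, scaling_factor):
        vnf.pause()                      # existing sessions stall here
        snapshot = vnf.snapshot()
        vnf.reboot(image=snapshot,
                   cpus=vnf.cpus * scaling_factor,
                   ram=vnf.ram * scaling_factor,
                   disk=vnf.disk * scaling_factor)

class ScaleOutVIM:
    """Launches additional same-sized VMs; stateful VNFs must sync state."""
    def scale(self, vnf, scaling_factor):
        for _ in range(scaling_factor - 1):
            vnf.launch_replica()         # safe for stateless VNFs only

# The caller's intent is identical in both cases, yet the service
# impact differs:  vim.scale(my_vnf, 2)
</code>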
  
Even for single-VNF scaling, as the example shows, templating NBI calls is useful. In reality, service scenarios can be significantly more complex. For instance, a VNF definition can be quite complex and can consist of many functions (e.g., vIMS, vEPC). Another popular example is services that are composed of VNF graphs or service chains. Managing the lifecycle and performance of such VNFs or services often requires a series of well-orchestrated low-level API calls and state maintenance. Thus, in many use cases, capturing all this orchestration with a high-level intent API alone is not an option. Instead, the consumer of an API should be able to describe the requested workflow in a resource orchestration template and provide this template to the producer of the API before actually utilizing the API.
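
A minimal sketch of this "template first, API call second" interaction, with entirely hypothetical names (Domino does not define this exact interface), might look as follows:

<code python>
# Hypothetical sketch: the API consumer registers a workflow template
# with the producer up front, then invokes the API by referring to it.

class TemplateProducer:
    """API producer (e.g., a VIM) that accepts templates in advance."""
    def __init__(self):
        self.templates = {}

    def register_template(self, name, template):
        # The template spells out the intended workflow (conditions,
        # rules, ordering of low-level calls), so later API calls
        # leave no room for arbitrary interpretation.
        self.templates[name] = template

    def execute(self, name, **params):
        workflow = self.templates[name]
        for step in workflow['steps']:
            print('running step:', step, 'with', params)

producer = TemplateProducer()
producer.register_template('scale_out_vnf',
                            {'steps': ['launch_replica', 'sync_state',
                                       'update_load_balancer']})
producer.execute('scale_out_vnf', scaling_factor=2)
</code>
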
==== Documentation: ====
  
  * Presentations:
    * {{:project_proposals:opnfv_technical_discussion_-_domino_proposal.pptx|}}
  * API Docs (TBD)
  * Functional block description (TBD)
  * The project has dependencies on Parser, OpenStack Heat, and ONOSFW. It also depends on SFC solutions (e.g., OpenStack Neutron extensions, ODL SFC, ONOS SFC) at the API level.
  * Although it does not have any particular dependency on OpenStack Tacker, the project overlaps with it in the context of multi-site resource orchestration and parsing/translating/mapping TOSCA templates. The project also has potential overlaps with the Multisite project.
  * Open-O and Tacker are the main upstreams

(See [[:project_proposals/domino/dependencies|Dependency Analysis]] for a more detailed analysis)
  
==== Committers: ====
  
  * Ulas Kozat (ulas.kozat@huawei.com)
  * Prakash Ramchandran (prakash.ramchandran@huawei.com)
  * someone@hp.com (Vinayak Ram of the Parser project to assign an HP committer)
  
==== Contributors: ====
  
  * Artur Tyloch (artur.tyloc@canonical.com)
==== Planned deliverables: ====
  
    * OPNFV requirements: lifecycle management artifacts for VNF and service scaling
  * Dependencies
    * Parser project: the policy2tosca module is to be implemented in Parser. The tosca2heat and yang2tosca modules implemented in the Parser project will also be utilized.
    * OpenStack Heat: the project will use Heat as one of the template consumers (see the sketch after this list)
    * SDN controllers: the project will add SDN (e.g., ONOS and ODL) applications to act as template consumers
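
As an illustration of Heat acting as a template consumer, the following sketch pushes an already-translated HOT template to a Heat endpoint using python-heatclient. The endpoint, token, and template contents are placeholder assumptions, not project-defined values:

<code python>
# Sketch: hand a translated HOT template to OpenStack Heat, which then
# acts as the template consumer. Endpoint and token are placeholders.
from heatclient.client import Client

HEAT_ENDPOINT = 'http://heat.example.com:8004/v1/<tenant_id>'  # placeholder
AUTH_TOKEN = '<keystone-token>'                                # placeholder

hot_template = {
    'heat_template_version': '2015-10-15',
    'resources': {
        'vnf_vm': {
            'type': 'OS::Nova::Server',
            'properties': {'flavor': 'm1.small', 'image': 'vnf-image'},
        },
    },
}

heat = Client('1', endpoint=HEAT_ENDPOINT, token=AUTH_TOKEN)
heat.stacks.create(stack_name='domino-demo', template=hot_template)
</code>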
Line 103: Line 115:
  
Release C plans:
  * Specification of an example service policy for VNF auto-scaling and service model
  * Specification of an example service policy for VNFFG auto-scaling and service model
  * Implement template distribution to a single OpenStack Heat instance
  * Implement template distribution to two OpenStack Heat instances (see the sketch after this list)
  * Implement template distribution to a Heat instance and an ONOS instance
  * Implement use case 1 (orchestration templating) and use case 2 (API templating)
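
A minimal sketch of the fan-out step, distributing one translated template to two labeled Heat consumers (the endpoints, tokens, and site labels are hypothetical placeholders):

<code python>
# Sketch: distribute the same translated template to two Heat instances.
# Endpoints, tokens, and site labels are placeholders.
from heatclient.client import Client

CONSUMERS = {
    'site-a': {'endpoint': 'http://heat-a.example.com:8004/v1/<tenant>',
               'token': '<token-a>'},
    'site-b': {'endpoint': 'http://heat-b.example.com:8004/v1/<tenant>',
               'token': '<token-b>'},
}

def distribute(template, consumers):
    """Push one HOT template to every registered Heat consumer."""
    for label, cfg in consumers.items():
        heat = Client('1', endpoint=cfg['endpoint'], token=cfg['token'])
        heat.stacks.create(stack_name='domino-%s' % label,
                           template=template)
        print('distributed template to', label)

# Usage (with a HOT template dict as in the earlier sketch):
# distribute(hot_template, CONSUMERS)
</code>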