

Edge NFV Team Meetings

Logistics

* Alternating Wednesdays at 9:00 pm US Pacific time: 5:00 UTC Thursday (during US Standard Time) / 4:00 UTC Thursday (during US Daylight Saving Time)
Find your local time.

* <Add meeting day/time for alternating week>

* GoToMeeting https://global.gotomeeting.com/join/752333285

* IRC irc.freenode.net


Meeting Agendas and Minutes


November 25, 2015

Agenda

* 5 minutes on Scope to check for additions/deletions
* Review of Residential Services topic (MK)
* Time permitting

  • Review of Service Use Cases (Andrew S)
  • Dividing up the agreed topics so that work can begin

Minutes

Edge NFV scope
No further items suggested

Review of Residential Services
Michael K presented his slides on "Disposable Containers at the Edge", initially presented during the 'Futures' track at the 2015 OPNFV Summit.
Question: where is the edge? Virtualization initially developed in data centers. Now the edge has moved outward; "edge" increasingly refers to the ISP premises location closest to the customer.
Customer premises - the home - is the opposite of the data center in that compute and store are distributed rather than centralized. Orchestration as it exists was not designed for massive scale distributed endpoints such as the homes of service provider customers.
Should VNFs be deployed in subscribers' homes? Some examples illustrate why this is useful.
Examples include: WAN fault monitoring, performance monitoring, and reducing traffic between the premises and the service provider's network.
Introduction to Profiles (aka "templates")
Michael proposes that orchestration software will change from data center orchestration to pre-defined 'profiles'. Next proposal: "disposable" or short-deployment-time VNFs.
Another potential use of a VNF is to measure link metrics, e.g., metrics required by the FCC in the U.S.A.
Building hardware or software into the end device (e.g., a residential gateway) to measure link metrics represents a fixed cost to the device. This functionality can instead be performed by a VNF that is loaded temporarily to collect the data, then removed; i.e., the VNF is transient in the device.
Question: is this transient VNF always there and only used occasionally, e.g., on a licensed basis?
Disposable means requirement for hypervisor, right?
Under the proposal the VNF is removed from the subscriber's device when a trigger occurs. Examples of trigger include time limits or external or internal usage limits.
Removing the VNF can reduce the complexity of orchestration by eliminating or reducing the need for the orchestrator to manage the lifecycle of the VNF.
Proposal: use a simpler solution like profiles, thought of as a "pre-determined service chain" rather than an orchestrated service chain.
James said some service providers he has spoken with have similar ideas to what Michael proposed, e.g., static service chains previously defined and each with its own SKU.
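The profile idea discussed above can be sketched as a small data structure: a fixed, pre-defined service chain with its own SKU, plus the triggers (time limit or usage limit) that remove a transient VNF. This is a minimal illustration only; all class and field names are hypothetical, not from any real orchestrator.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class RemovalTrigger:
    max_lifetime: timedelta      # time-limit trigger
    max_bytes_collected: int     # usage-limit trigger

@dataclass
class Profile:
    sku: str                     # each static chain has its own SKU
    service_chain: list          # ordered VNF names, fixed in advance
    trigger: RemovalTrigger

def should_remove(profile, started_at, bytes_collected, now):
    """Return True when any removal trigger has fired."""
    t = profile.trigger
    return (now - started_at >= t.max_lifetime
            or bytes_collected >= t.max_bytes_collected)

# Hypothetical profile for a transient link-metrics probe VNF.
probe = Profile(
    sku="LINK-METRICS-01",
    service_chain=["link-metrics-probe"],
    trigger=RemovalTrigger(max_lifetime=timedelta(hours=1),
                           max_bytes_collected=10_000_000),
)

start = datetime(2015, 11, 25, 9, 0)
# After 90 minutes the time-limit trigger has fired, so the VNF is removed.
print(should_remove(probe, start, 5_000, datetime(2015, 11, 25, 10, 30)))
```

Removing the VNF once a trigger fires is what lets the orchestrator forget about it, which is the lifecycle simplification described above.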
Another line of thought, from Peter Willis of BT: OpenStack doesn't scale. Several fundamental features of OpenStack need to mature or be modified before they are ready for large-scale ISP deployments.
James will explore inviting Peter to attend an eNFV meeting.
Michael defers the topic of security considerations to a future call.

Adva Service Use Cases
Andrew presented slides on the topic OPNFV for eNFVI Platform
https://wiki.opnfv.org/_media/project_proposals/opnfv-edge-nfvi.pptx
The presentation introduces an Edge NFVI deployment use case:

  • CPE-hosted VNF
  • thousands of "micro data centers"
  • host is explicitly selected rather than chosen by the Nova scheduler
  • usually a single host at the site
  • all management traffic is in-band due to the limited number of device interfaces
This model applies to mid- to large-size enterprises, but perhaps eventually to smaller enterprises as well. Michael suggests identifying differences between large and small enterprises. For example large enterprises may have redundant resources while smaller enterprises may not.
Use Case: WAN: 1 GE uplinks, demarcation is mandatory, tunneling over IP/MPLS WAN, underlying network might fail, tenant traffic may be intercepted
For further discussion: should architecture be 'bare metal' or virtual components at the edge?
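The placement model described in this use case (host explicitly selected, usually a single host per site) can be sketched as a simple site-to-host lookup rather than a scheduling decision. The mapping and all names below are illustrative assumptions, not a real inventory or API.

```python
# One host per "micro data center" site: placement is a lookup, not a
# scheduling decision. All site and host names are hypothetical.
SITE_HOSTS = {
    "branch-nyc-001": "cpe-host-nyc-001",
    "branch-sfo-002": "cpe-host-sfo-002",
}

def place_vnf(site_id):
    """Return the compute host for a site, or raise if the site is unknown.

    Unlike a scheduler (e.g. the Nova scheduler), there is nothing to
    filter or weigh: the site fully determines the host.
    """
    try:
        return SITE_HOSTS[site_id]
    except KeyError:
        raise ValueError(f"no registered host for site {site_id!r}")

print(place_vnf("branch-nyc-001"))  # cpe-host-nyc-001
```

This is one way to frame the "static vs. fully dynamic" question below: the static end of the spectrum reduces orchestration to exactly this kind of table.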

Core Questions
Large number of small centers: what is the viable approach? Static, semi-static, or fully dynamic?
Bare metal vs. not bare metal at edge
What is the model for getting traffic back from edge to core? What is the security model?
James proposes to subdivide the project group to sub-groups to investigate these topics.

Action Items
Define what class of enterprise this project considers in scope and document in the project charter.
James will request an e-mail reflector for eNFV from Linux Foundation


November 18, 2015

Agenda

* Review of the Scope
* Review of the Connectivity Services LSO activity in ODL and OPNFV

Minutes

Project description:
* The NFV movement is largely focused on gaining the efficiencies of large data centre compute infrastructure. This is sensible; however, to paraphrase the old switching/routing saying … centralize what you can, distribute what you must. Some applications naturally belong at the edge: WAN Acceleration, Content Caching, and, depending on your philosophy, Firewalling all fall into that category. Other more specialized test applications also need to be at the edge to exercise the portions of the network under test. Similarly, many business applications are most efficiently delivered at the edge.
* The purpose of this Requirements Project is to articulate the capabilities and behaviours needed in Edge NFV platforms, and how they interact with centralized NFVI and MANO components of NFV solutions.
* MK proposes to strike/clarify the use of the word “centralized”, since large eNFV scenarios likely can’t be handled by a single NFVI instance
* Appropriate Tunneling for User Traffic across WAN (Ethernet, IP/MPLS) links
* Appropriate Tunneling for Management Traffic across WAN links
* Including reachability requirements to the compute platform
* Extending Multi-DC management to address many small "DC" locations (Jesse interested … "micro DC running on the edge")
* Monitoring Capabilities required for a remote Compute Node
* Squaring Bare Metal with remote survivability and whether IaaS is more appropriate for remote locations
* Include any architecture diagrams or specifications, reference to OPNFV requirements list.
* Jesse: IaaS needs to support VMs and Containers co-existing (Liberty is all about containers) … Google Container Service, AWS Containers … Containers will reside in the VMs
* ML: Landslide product working on Edge … just letting us all know
* MK: is interested in home … so scaling to millions … smaller number of VNFs, larger number of CNs … perhaps service chaining gets simpler
* MK: Cablelabs working with MEF. Very focused at L2 setting up interfaces and formats (whereas his interface) … MK can give overview of this.
* ML: Curious about security aspect across the WAN … how does one secure inter-VNF communication between VNFs in the chain. MK: Authentication of remote nodes to the controller … MK can share what they did at Cablelabs
* Kevin Luehrs Data Plane Optimization (TBD if eNFV needs to specifically address this, or pick up work from other groups … DPACC?)
* We discussed MANO and MK explained that OPNFV was likely to expand scope to address this.
* John pointed out that we need to Define some use cases …
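One scope item above, monitoring capabilities required for a remote compute node, can be illustrated with a heartbeat classifier that distinguishes a failed node from a brief WAN outage. The thresholds and state names are assumptions for illustration only.

```python
HEARTBEAT_INTERVAL = 30     # seconds between expected heartbeats (assumed)
MISSED_BEFORE_DOWN = 3      # tolerate brief WAN blips before declaring "down"

def node_state(last_heartbeat, now):
    """Classify a remote compute node from its most recent heartbeat time."""
    missed = (now - last_heartbeat) / HEARTBEAT_INTERVAL
    if missed < 1:
        return "up"
    if missed < MISSED_BEFORE_DOWN:
        return "degraded"   # possibly just WAN loss, not node failure
    return "down"

print(node_state(90, 100))   # up (10 s since last heartbeat)
print(node_state(30, 100))   # degraded (70 s: maybe a WAN blip)
print(node_state(0, 400))    # down (400 s with no heartbeat)
```

The "degraded" middle state matters here because, as noted in the use case above, all management traffic is in-band: losing the WAN link also silences the heartbeat.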

Content
* Kevin shared a description of the OPNFV Connectivity Services LSO (LSOAPI) project and the OpenDaylight UNI Manager project

  • MEF ELINE service description
  • Defined YANG model to feed into ODL
  • Instances were OVS running on a Raspberry Pi, talked to using OVSDB
  • Service model is to connect UNIs with a GRE tunnel
  • Connectivity Services LSO is the OPNFV aspect of this work
  • Next step is to migrate to MEF’s latest service model
  • Focused on the “SDN aspect of this problem”
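The service model described above (connecting UNIs with a GRE tunnel) could be sketched roughly as follows. The field names are illustrative assumptions, not the actual MEF or OpenDaylight YANG schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Uni:
    device: str        # e.g. an OVS instance on a Raspberry Pi (hypothetical name)
    mgmt_ip: str       # address the controller reaches the device at

@dataclass(frozen=True)
class ElineService:
    name: str
    a_end: Uni
    z_end: Uni

def gre_tunnel_config(svc):
    """Render per-end OVS-style GRE options for the two UNIs of a service."""
    return {
        svc.a_end.device: {"type": "gre", "remote_ip": svc.z_end.mgmt_ip},
        svc.z_end.device: {"type": "gre", "remote_ip": svc.a_end.mgmt_ip},
    }

svc = ElineService(
    name="eline-demo",
    a_end=Uni(device="pi-ovs-a", mgmt_ip="10.0.0.11"),
    z_end=Uni(device="pi-ovs-b", mgmt_ip="10.0.0.12"),
)
cfg = gre_tunnel_config(svc)
print(cfg["pi-ovs-a"]["remote_ip"])  # 10.0.0.12
```

In the real project the equivalent information would be carried in the YANG model fed into ODL; this sketch only shows the point-to-point shape of the service.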

November 11, 2015

Agenda

* <add agenda here>

Minutes

* <add minutes here>


Back to ENFV main project page

meetings/enfv.1448468914.txt.gz · Last modified: 2015/11/25 16:28 by Kevin Luehrs