Multisite Project Meeting

Logistics

Agenda of the week

  • The agenda is sent to the OPNFV mailing list every week: opnfv-tech-discuss@lists.opnfv.org
  • Please subscribe to the mailing list to receive the agenda

Past Meeting Agenda and Minutes

Mar 10, 2016: Agenda & Minutes

Mar 3, 2016: Agenda & Minutes

Feb 25, 2016: Agenda & Minutes

Feb 18, 2016: Agenda & Minutes

Jan 28, 2016: Agenda & Minutes

Jan 21, 2016: Agenda & Minutes

Jan 14, 2016: Agenda & Minutes

Dec 10, 2015: Agenda & Minutes

Dec 3, 2015: Agenda & Minutes

Nov 19, 2015: Agenda & Minutes

Nov 5, 2015: Agenda & Minutes

Oct 14, 2015: Agenda & Minutes

Oct 1, 2015: Agenda & Minutes

Sept 24, 2015: Agenda & Minutes

  • Agenda
    • JIRA ticket follow up and sprint plan for B release
    • Use case 4: Centralized service for resource management and/or replication (sync tenant resources like images, SSH keys, security groups, etc.)
    • One solution for quota management: https://etherpad.opnfv.org/p/centralized_quota_management
  • Minutes
    • JIRA ticket and sprint (joehuang, 08:06:02)
      • <joehuang> could we finish review for use case 1/2/3 before Oct 25, so that we finish sprint 1; after that we spend 2 months on use case 4/5 review
      • [16:08] <joehuang> and we are now discussing use case 4 and later use case 5
      • [16:08] <joehuang> use cases 1/3 are in the review/approve process in gerrit
      • [16:09] <joehuang> and because Colin has some issues accessing JIRA, I will help him prepare use case 2 for review and approval
      • [16:09] <sorantis> I’ll review the commits asap
    • use case 4 discussion (joehuang, 08:11:56)
    • [16:53] <joehuang> I also have another idea to have a standalone service for the distributed cloud, used for post control of quota, plus a grouped resource view and proactive on-demand replication for ssh keys/image/seg
    • [16:54] <joehuang> but service provisioning for VM/volume/network will be directly called to each region separately
    • [16:55] <sorantis> ok, that’s what I’m also aiming for. Have a service working aside, quite transparent, which has zero impact on the openstack codebase
    • [16:55] <joehuang> even for the ceilometer part (use case 5), the view will be generated with a task manager and collect information on demand
    • [16:57] <joehuang> the centralized service will collect usage from each region periodically, and send an alarm to the tenant if the quota is exceeded
    • [16:57] <joehuang> this will be post control
    • [16:58] <sorantis> without any action taken?
    • [16:59] <joehuang> if needed, then your proposal is a good complement
    • [17:01] <sorantis> good. will you have time to describe your idea on etherpad before the next meeting?
    • [17:01] <joehuang> I'll
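
The "post control" idea above (a standalone service that periodically collects usage from every region and alarms the tenant when the aggregate exceeds its quota) could look roughly like the loop below. This is an illustrative sketch only, not an agreed design: collect_region_usage() and notify_tenant() are hypothetical hooks (e.g. per-region Nova limits queries and a mail/webhook alarm), and the region names and quota value are placeholders.

<code python>
# Illustrative sketch of the centralized quota "post control" loop discussed
# above: poll usage per region, sum it, and alarm if the aggregate quota is
# exceeded. collect_region_usage()/notify_tenant() are hypothetical hooks.
import time

REGIONS = ["RegionOne", "RegionTwo"]   # placeholder region names
AGGREGATE_CORE_QUOTA = 100             # placeholder tenant-wide vCPU quota
POLL_INTERVAL = 600                    # seconds between polling cycles


def collect_region_usage(region):
    """Hypothetical hook: query that region's Nova limits/usage for the tenant.
    Stubbed with a fixed value so the sketch runs standalone."""
    return {"cores_used": 10}


def notify_tenant(message):
    """Hypothetical alarm hook: mail, webhook, or similar notification."""
    print("ALARM:", message)


def poll_once():
    used_cores = sum(collect_region_usage(r)["cores_used"] for r in REGIONS)
    if used_cores > AGGREGATE_CORE_QUOTA:
        notify_tenant("aggregate vCPU usage %d exceeds quota %d"
                      % (used_cores, AGGREGATE_CORE_QUOTA))


if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(POLL_INTERVAL)
</code>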

Sept 10, 2015: Agenda & Minutes

Sept 3, 2015: Agenda & Minutes

Aug 27, 2015: Agenda & Minutes

Aug 20, 2015: Agenda & Minutes

Jul 23, 2015: Agenda & Minutes

Jul 16, 2015: Agenda & Minutes

  • Agenda
  • Minutes
    • use case 1 identity service management (joehuang, 08:06:19)
      • 08:16:41 <colintd_> My point is that we should be clear about what changes we want to allow if the network is partitioned, and then critically how the resulting system converges when the partition ends.
      • 08:17:23 <colintd_> Many telcos have a very hard requirement that geographic sites should be able to operate as isolated entities to deal with earthquake, flood, fire, etc. knocking out interconnects.
      • 08:17:53 <colintd_> They normally need the ability to make changes in the isolated site to deal with changing circumstances
      • If Keystone is not installed at each site, we have to handle the "Escape from site-level KeyStone failure" use case. I'll go on with the prototype; let's note Colin's concerns and see how to address them (zhipeng, 08:24:52)
    • use case 2. VNF high availability across VIM
      • 08:32:37 <zhipeng> #agreed #2 is intrasite clouds, #3 is intersite clouds
      • 08:38:13 <colintd_> To me the biggest difference between #2 & #3 is that #2 is all about maintaining media/signalling and calls (which requires IP transfer), whilst #3 is about restoring/continuing service but most likely not calls.
      • 08:38:49 <colintd_> (use case) #3 does not require special openstack networking support, but (use case) #2 does.
      • 08:39:32 <zhipeng> #agreed inter-cloud intra-site L2 and L3 networking enhancement is one requirement from OPNFV Multisite to OpenStack
      • 08:42:06 <joehuang> Agree, but we need to describe that from two aspects: one is VNF communication to other VNFs (inter-VNF), the other is VNF internal communication for heartbeat and session replication (intra-VNF)
      • For L2 the major requirements relate to config/management of those networks. For L2 do you need to use provider networks? Exactly how do you disable anti-spoof support? etc. For L3 it might make sense to have a common Neutron API for take IP / free IP support, which can then be plumbed onto multiple underlying technologies. In fact it (zhipeng, 08:43:59)
      • may even make sense to use the same API for L2, just have it trigger GARP. (fzdarsky, 08:44:42) (a GARP sketch follows after this meeting's notes)
      • 08:46:12 <colintd_> The latter leads onto cross-cloud tenant networks
      • 08:46:47 <joehuang> agree, it's cross-cloud tenant networks for intra-VNF traffic
      • 08:57:06 <zhipeng> #agreed second req out of use case #2 given intra-site inter-cloud enhancement on L2/L3 IP traffic transfer (inter and intra VNF) is a requirement from OPNFV Multisite to OpenStack
      • ACTION: colintd_ to reword the req to be more accurate :) (zhipeng, 08:57:26)

* link (meetbot didn't work well; only the log is linked here, refer to the "Minutes" section for the summary):
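
The L2 failover path discussed above relies on the application announcing a moved IP with gratuitous ARP. The following is a minimal sketch of such an announcement using scapy; the interface, VIP, and MAC are placeholders, and in practice the port would also need allowed-address-pairs (or disabled port security) so anti-spoof rules do not drop the traffic. This is illustrative only, not part of any agreed requirement.

<code python>
# Minimal GARP sketch (illustrative only): announce that a virtual IP has
# moved to this instance by broadcasting a gratuitous ARP reply.
# Requires scapy and root privileges; interface/IP/MAC below are placeholders.
from scapy.all import ARP, Ether, sendp

VIP = "192.0.2.10"            # the shared/floating IP taken over on failover
IFACE = "eth0"                # interface attached to the tenant/provider network
MY_MAC = "fa:16:3e:00:00:01"  # MAC of the port now owning the VIP

garp = Ether(dst="ff:ff:ff:ff:ff:ff", src=MY_MAC) / ARP(
    op=2,                     # "is-at" (ARP reply), the usual gratuitous ARP form
    hwsrc=MY_MAC,
    psrc=VIP,
    hwdst="ff:ff:ff:ff:ff:ff",
    pdst=VIP,
)
sendp(garp, iface=IFACE, verbose=False)
</code>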

Jul 9, 2015: Agenda & Minutes

* link:

Jul 2, 2015: Agenda & Minutes

  • Agenda
    • use case prioritization
      • B-release goal discussion
      • minimum viable use cases
      • wanted-but-at-risk use cases
      • out-of-plan use cases
  • Minutes
    • use cases merged:
      • Use case 1: multisite identity service management
      • Use case 2. VNF high availability across VIM
      • Use case 3. VNF Geo-site Redundancy
      • Use case 4. Centralized service for resource management and/or replication (sync tenant resources like images, flavors, SSH keys, security groups, etc.)
      • Use case 5. Centralized monitoring service
    • Proposed use case prioritization: 1, (2), 3, 4, 5. Opinions still diverge on use case 2: for those who need this deployment scenario it is the highest priority, but for those who do not, it is the lowest.
    • At least deliver use cases/requirements/gap analysis for the B release; approval of blueprints/specs/code is up to OpenStack.

Jun 25, 2015: Agenda & Minutes

  • Agenda
    • use case 2 discussion:
      • IP movement between OpenStack instances
      • Cross Neutron L2/L3 networking for heart-beat/state(session)replication
      • image replication between OpenStack instances
  • Minutes (the meetbot doesn't work today)
    • [16:07] <joehuang> #info today's agenda: IP movement between OpenStack instances, Cross Neutron L2/L3 networking for heart-beat/state(session)replication
    • [16:08] <joehuang> #info if we have time, then image replication between OpenStack instances
    • [16:10] <joehuang> How about address-pair to be used in the IP movement to achieve the HA for VNF across OpenStack
    • [16:15] <joehuang> the IP address to communicate with other VNF/PNF should be an IP address from provider network
    • [16:15] <joehuang> because the VNF has to talk to PNF at the practical deployment
    • [16:26] <sorantis> but do we necessarily need to automate this with provider network devices?
    • [16:29] <sorantis> I guess, you can configure the physical router prior to, let’s say MME deployment? and then provide the configured IP addresses upon deployment of MME.
    • [16:31] <sorantis> you can have a separate tool working with that specific provider physical hardware and producing these IP addresses, that later on could be used as deployment parameters for MME
    • [16:31] <sorantis> if you mean that this tool is part of VNFM then yes
    • [16:32] <sorantis> on the other hand why should VNFM be concerned about this.
    • [16:32] <sorantis> A VNF can expect some input parameters. The values for these parameters can come from anywhere, as long as they are correct for deployment
    • [16:33] <sorantis> so I wouldn’t necessarily harden this function into VNFM
    • [16:33] <joehuang> just want to clarify whether this IP address management should be the responsibility of the VIM
    • [16:34] <joehuang> we can exclude this functionality (IP address management, which is used for interaction among VNF/PNF) from the VIM, i.e. from OpenStack
    • [16:35] <sorantis> i think so
    • [16:35] <sorantis> i mean at least for this case
    • [16:36] <tallgren> Sorry to interrupt: some SDN solutions want to manage the IPs for the VNFs, so this should be included
    • [16:37] <sorantis> can SDN be used for that?
    • [16:37] <joehuang> the SDN controller is running above OpenStack, and manipulates the VNF, isn't it
    • [16:39] <sorantis> SDN controller interacts with network devices, no? sorry, I’m not very familiar with SDN
    • [16:39] <joehuang> to tallgren: can you elaborate how to do that
    • [16:40] <tallgren> When your VNF starts and creates a port, it gets an IP from the SDN controller
    • [16:41] <fzdarsky> SDN controller acting as DHCP server
    • [16:41] <joehuang> but currently, the IP/mac is allocated in Neutron, or with one driver
    • [16:41] <sorantis> so can we say that this is the SDN controller’s responsibility to provide an IP address for the VNF?
    • [16:42] <fzdarsky> that would be an SDN controller plugged under Neutron, right?
    • [16:42] <sorantis> yes
    • [16:43] <fzdarsky> With PNFs, the OSS/BSS currently configures FQDNs (via the EM) and the PNFs then resolve to IP via DNS.
    • [16:44] <tallgren> When you create a network in OpenStack (Neutron), you need to know how the IP management will be done
    • [16:44] <fzdarsky> The IP being assigned from a managed address pool by DHCP.
    • [16:44] <tallgren> Yes
    • [16:45] <joehuang> so the IP is from a VLAN provider network, or as a floating IP from external network
    • [16:46] <joehuang> from your description, it's from a VLAN provider network?
    • [16:49] <joehuang> ok, how will the standby get the IP before it becomes the master, if the master failed
    • [16:49] <fzdarsky> Is there a figure somewhere that illustrates the use case?
    • [16:49] <joehuang> we can draw a figure after the meeting, but not now
    • [16:50] <tallgren> BTW, I am not sure if this is a side track, but floating IPs do not really work with IPv6
    • [16:50] <xiaolong> sorry, I change the subject of discussion a little bit: is there any VNF which should be deployed across multiple sites or multiple openstack instances?
    • [16:50] <joehuang> #action joehuang draw a figure for the use case 2, especially the IP address https://wiki.opnfv.org/_media/multisite/vnf_ha_across_vim.png
    • [16:51] <joehuang> this is the use case 2. two openstack instances in one site, and the master in one OpenStack, the standby in the other one
    • [16:51] <tallgren> I would not deploy a VNF across OpenStack instances
    • [16:51] <fzdarsky> xiaolong, not a single VNF instance, but one active and one standby instance.
    • [16:59] <xiaolong> ok, thanks for the explanation. I think we need a clearer definition of the use case
    • [16:59] <fzdarsky> If we talk about failover across regions that's a different thing
    • [17:00] <sorantis> failover between AZs technically means that you have two VNF instances running in the same cloud
    • [17:00] <fzdarsky> –> different API endpoints, no coordination between regions
    • [17:00] <joehuang> It's difficult to draw a conclusion today on whether to include IP address management or not. Let's continue the discussion next time. Before the next meeting, I hope we can discuss on the mailing list whether the use case should be addressed
    • [17:00] <xiaolong> I also need to understand whether we need such a complicated architecture: multi-AZ, multi-region inside multi-site
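
The address-pair idea raised at [16:10] above can be illustrated with a short openstacksdk snippet: the shared VIP is added as an allowed address pair on the Neutron port of the active VM in one OpenStack instance and of the standby VM in the other, so anti-spoof rules do not drop traffic when the VIP moves. Cloud profile names, port IDs, and the VIP below are placeholders; this is a sketch, not an agreed requirement.

<code python>
# Illustrative sketch: permit a shared VIP on the ports of the active and
# standby VM so Neutron anti-spoof rules allow the IP to move on failover.
# Cloud profile names, port IDs and the VIP below are placeholders.
import openstack

VIP = "192.0.2.10"                                # shared/virtual IP claimed by the active VM
PORTS = {
    "site1-openstack": "<active-vm-port-id>",     # cloud profile -> port ID (placeholders)
    "site2-openstack": "<standby-vm-port-id>",
}

for cloud_name, port_id in PORTS.items():
    conn = openstack.connect(cloud=cloud_name)    # one connection per OpenStack instance
    conn.network.update_port(
        port_id,
        allowed_address_pairs=[{"ip_address": VIP}],
    )
</code>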

Jun 18, 2015: Agenda & Minutes

  • Agenda
    • Gap analysis for use case 2
  • Minutes
    • I was focusing on the need to be able to redirect external traffic between application instances sitting in one or more "local" clouds. (colintd, 08:36:58)
    • Where the example was suggested (I missed who) of a VNF implementing VRRP, with one instance in each of two clouds, and the question being what config/changes are needed to allow this to work (is anti-spoof an issue) (zhipeng, 08:38:17)
    • We also talked about SDN controllers, and how in many telco deployments these control much broader end-to-end traffic than simply intra-cloud. They might however be implemented using multiple redundant control nodes, say one per cloud, but providing a global function. (colintd, 08:39:46)
    • In this case neutron may be used to "connect" to those networks, but isn't the major control interface for the whole system, just a "joining" interface (colintd, 08:40:17)
    • Finally, returning to traffic failover, we talked about how for L2 failover can be triggered by apps just using GARP, but L3 requires protocol level (BGP/SDN) API. We also talked about how L3 convergence times (say BGP) might be too slow by default, especially in error cases (loss of node) as opposed to managed failover. (colintd, 08:42:03)
    • <colintd> Moving the IP address is required when you have a core node serving lots of remote endpoints in the external network (e.g. voip phones with RTP streams). On failover it is too slow to resignal all of those, so you need to redirect traffic to the new node. Given routing is based on IP address, this needs to be moved.
    • We are looking at a similar function between clouds, for which this may or may not need to be extended. There may also be interplay with provider networks. (colintd, 08:58:18)
    • I will investigate before the next meeting. (colintd, 08:58:27)

* link:

Jun 11, 2015: Agenda & Minutes

  • Agenda
    • Gap analysis for use case 2, use case 4.1.2, and use case 4.1.3; if we have enough time, then discuss use cases 4.1.1 and 1
  • Minutes
    • Centralized resource management required for multi-site resource management. (Malla, 08:09:40)
    • Keystone supports regions concept as part of an endpoint. This enables communicating with all the registered regions via corresponding endpoints (sorantis_, 08:16:50)
    • The Promise project is working on resource management; maybe we can get some information if we discuss with the Promise project folks. (Malla, 08:17:23)
    • <joehuang>does promise support multi-site?
    • <xiaolong> to make things clear, let's talk about a concrete use case: a user wants to know his total virtual resources (cpu, ram, disk) across multiple regions and multiple openstack instances, how can he do that?
    • <sorantis_> can the user use a "for" loop?
    • <xiaolong> no, if it requires the user to program it himself, it is a problem
    • <sorantis_> I think creating another layer of APIs just for the sake of hiding a for loop is overkill
    • some "for" loop multi-tenancy discussion here (a sketch of such an iteration follows after this meeting's notes)
    • <xiaolong> let's think more about the use cases: without talking about the presentation layer, are there any more operations we should do besides the simple "for loop", at least aggregation (sum), sort, select?
    • <joehuang> Is there anyone who wants to use Cells with shared Nova, Cinder, Neutron… in multisite?
    • <sorantis> cells probably are best to use within a large datacenter
    • AGREED: cells probably are best to use within a large datacenter (joehuang, 08:46:21)
    • <sorantis> for me the stated use case can easily be addressed with a simple iteration
    • <joehuang> But if you look at the use case 4.1.4
    • <joehuang> The resource utilization also should be controlled by quotas
    • Quota and usage discussion
    • <joehuang> Total resource view means your total quota in multi-site
    • <sorantis> if we use the same definitions as in nova, neutron, cinder, etc. then quota is a limit you apply on a resource type. Quota usage is the amount of resources currently in use

* link:
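
A minimal sketch of the simple iteration discussed above: walk the regions registered in Keystone and sum the tenant's compute usage reported by each region's Nova limits API. It assumes openstacksdk and a cloud profile named "multisite" (a placeholder); attribute names follow openstacksdk and may differ across versions.

<code python>
# Illustrative "for loop" aggregation across regions (not an agreed design).
# Assumes an openstacksdk cloud profile named "multisite" (placeholder).
import openstack


def aggregate_compute_usage(cloud_name="multisite"):
    base = openstack.connect(cloud=cloud_name)
    totals = {"cores_used": 0, "ram_mb_used": 0, "instances_used": 0}
    for region in base.identity.regions():               # regions registered in Keystone
        conn = openstack.connect(cloud=cloud_name, region_name=region.id)
        limits = conn.compute.get_limits().absolute       # per-region Nova absolute limits
        totals["cores_used"] += limits.total_cores_used or 0
        totals["ram_mb_used"] += limits.total_ram_used or 0
        totals["instances_used"] += limits.instances_used or 0
    return totals


if __name__ == "__main__":
    print(aggregate_compute_usage())
</code>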

Jun 4, 2015: Agenda & Minutes

  • Minutes
    • Colin explains on the Geo-Redundancy (zhipeng, 08:09:57)
    • Colin mentions that the current cinder and swift replication scheme risks fault propagation (zhipeng, 08:26:49)
    • Joe explains various currently available OpenStack solutions for multi site scenario (zhipeng, 08:34:34)
    • Dimitri mentions that Nova Cells is now built in and may gain momentum (zhipeng, 08:37:09)
    • Dimitri says CERN, Rackspace, and GoDaddy all use Cells, although with modifications (zhipeng, 08:46:08)

* link:

May 28, 2015: Agenda & Minutes

  • Minutes
    • Joe explains the Keystone multisite failover use case (zhipeng, 08:15:33)
    • no questions on the Keystone use case; Joe goes on to explain the VNF HA multisite use case (zhipeng, 08:20:15)
    • for the HA use case, two main gaps are targeted (zhipeng, 08:25:06)
    • overlay L2 network and shared floating IP, across multiple sites (zhipeng, 08:25:32)
    • Jun Li (zhipeng, 08:26:49)
    • Xiaolong suggests having requirements in addition to use cases (zhipeng, 08:28:33)
    • Xiaolong suggests having a centralized platform for management, so items 4 and 5 should maybe be merged (zhipeng, 08:30:17)
    • Malla has a question about Keystone (zhipeng, 08:32:50)
    • Malla notes that the current Keystone solution already seems distributed and asks whether it can support the requirement (zhipeng, 08:36:44)
    • Joe answers that even if Keystone could be deployed distributed across multiple sites, it would be almost impossible to manage the DBs (zhipeng, 08:38:14)
    • Malla asks, for the VNF HA use case, how to hand over to the standby VM seamlessly (zhipeng, 08:39:09)
    • Joe answers that when the standby VM detects the failure, or an arbitrator detects the failure, the handover process starts (zhipeng, 08:40:40)
    • Joe asks Xiaolong which token type he would prefer: UUID, PKI, or Fernet (zhipeng, 08:44:18)
    • Xiaolong expressed that PKI might be preferred (zhipeng, 08:44:43)
    • Resource quota is very difficult in multi-site OpenStack. (malla, 08:49:33)
    • Malla asks, regarding the Telco WG, whether they've already started the same work (zhipeng, 08:49:41)
    • Joe answers that there is no similar work currently (zhipeng, 08:51:12)
    • Malla suggests inviting the OpenStack multisite folks to the meeting (zhipeng, 08:52:50)
    • maybe a special meeting with the North America folks in OpenStack (zhipeng, 08:53:46)

May 07, 2015: Agenda & Minutes

  • Agenda
    • Self introduction
    • Discussion of work plan
  • Minutes
    • Xiaolong proposes having a wiki for collaboration (zhipeng, 08:19:48)
    • Uli proposes using Etherpad for early discussion (zhipeng, 08:20:39)
    • AGREED: use Etherpad for early discussion, wiki editing will begin as soon as any conclusion reached at the Etherpad (zhipeng, 08:24:03)
    • AGREED: gather use cases from each committer first, and then do the categorization and owner appointment (zhipeng, 08:29:54)