Doctor Team Meetings

See https://wiki.opnfv.org/meetings/.

Agenda of the next meeting and latest meeting minutes: https://etherpad.opnfv.org/p/doctor_meetings


June 9, 2015

Agenda:

  • BP Status
    • Nova
    • Ceilometer
  • Deliverable status
  • AoB
    • committer list update

IRC Meeting Logs: http://meetbot.opnfv.org/meetings/opnfv-meeting/2015/opnfv-meeting.2015-06-09-13.02.html

June 2, 2015

Agenda:

  • BP Status
    • Nova
    • Ceilometer
  • Deliverable Status

IRC Meeting Logs: http://meetbot.opnfv.org/meetings/opnfv-meeting/2015/opnfv-meeting.2015-06-02-13.02.html

May 26, 2015

Agenda:

  • OpenStack Summit report (related to Doctor)
    • Doctor Breakout Session
    • Ceilometer: event alarm
    • Nova session(s)
  • Short information from informal meeting with ETSI NFV REL, OPNFV HA and OPNFV Doctor at NFV #10 in Sanya
  • BP Status
    • Nova
    • Ceilometer
  • Deliverable
  • Promotions

IRC Meeting Logs: http://meetbot.opnfv.org/meetings/opnfv-meeting/2015/opnfv-meeting.2015-05-26-13.02.html

May 19, 2015

Canceled

May 12, 2015

Agenda:

  • Deliverable
    • Status of Deliverable.
    • Vote on document approval (i.e. declare it stable).
  • Status of BPs
  • OpenStack Summit
    • Preparation for Doctor session
    • Related summit sessions
      • Copper session
      • May 21, 9:50-10:30, Design Summit Ceilometer "Event alarms"
  • Meeting with ETSI NFV REL at NFV #10
    • Joint work w/ ETSI NFV REL "Active Monitoring"

Participants: Gerald Kunzmann, Ryota Mibu, Carlos Goncalves, Bryan Sullivan, Adi Molkho, Dan Druta, Michael Godley, Maryam Tahhan, Tomi Juvonen, Tommy Lindgren, Gurpreet Singh

Minutes:

  • use IRC instead of Etherpad?
    • Ryota will check technical issues, e.g. using the MeetBot
  • Deliverables
    • we have not finished yet
      • further comments received today by E; we also have two docx files from Dan
      • few minor syntax errors when compiling the document (in patch set 6)
      • Carlos is working with Octopus team to auto-generate HTML/PDF version of the document, but still buggy (false-positive in jenkins-ci)
      • can we get consensus on the content? would be nice to have a stable version of the document.
    • Voting can also be done via gerrit or email approval (tech-discuss list)
      • common in open source community; we can publish a stable version and then implement bugfixes afterwards
      • no objections in the call on the current version of the deliverable
  • Status of BPs
  • OpenStack Summit
    • Tomi will mainly join the Nova sessions
    • Preparation for Doctor session
      • Monday 5pm (after Promise session)
      • Ryota is preparing presentation slides based on deliverable and Prague slides
      • will provide slides by Thursday to collect comments by Doctor team; Bryan asks to upload the draft slides beforehand to discuss it as soon as possible
      • collaboration with other OPNFV project team and other SDOs and communities
    • Ceilometer session
    • Copper session
  • ETSI NFV at NFV#10
    • a meeting with REL is scheduled for ETSI NFV meeting in Sanya
    • Dan, Tommy, Gurpreet, Obana-san (DOCOMO) will be there
  • Joint work w/ ETSI NFV REL "active monitoring"
    • proposal for bi-weekly meeting (still under discussion)

May 5, 2015

Agenda:

  • Status of Deliverable
  • Status of BPs
  • Participation at OpenStack Summit

Minutes:

  • Status of Deliverable
  • Status of BPs
  • Participation at OpenStack Summit
  • AOB
    • Tommy: in document we state "Inspector might be based on Monasca"
      • Carlos: originally proposed by NEC; integrated with OpenStack; NEC found some "bugs" and "gaps" (e.g. delay is significantly more than 1s); meet them in Vancouver; it is a candidate, but no other platform seems to be integrated in OpenStack
      • Gerald: meeting with Fujitsu on Monasca two weeks ago
      • Carlos: pluggable architecture, could support Nagios or Zabbix
      • Gerald: in Monasca there is currently no requirement to do reporting within 1s
    • Last meeting with REL
      • does Tommy plan to meet with ETSI NFV REL? if time allows
    • ETSI NFV IFA
      • IFA documents not yet open to public

April 28, 2015

Joint meeting with ETSI NFV REL team. Agenda:

  1. Identify Purpose of the call
    • Collaboration kick-off
  2. NFV REL:
    • Project Overview
    • NFV upgrade
    • Active monitoring and failure detection
  3. OPNFV Doctor:
    • Project Overview
    • Use cases
  4. Collaboration methodology discussion
  5. Wrap-up

Minutes:

  1. Purpose
    • Ryota: know each other; see how to work together; further technology discussion needed at later stage
    • Markus Schoeller (NEC): no IPR declarations today, today only exchange of public information
    • policies how to work together w.r.t IPR etc should be defined for later work
    • Gurpreet: high-level of Doctor project; fault-detection and management; what are use cases of Doctor?
  2. NFV REL introduction (Markus Schoeller)
    1. Project overview: see ETSI NFVREL(14)000200
      • dedicated reliability project
      • Ryota: target size / number of applications?
      • Tommy: which work items focus on VIM part? indirectly addressed in monitoring and failure detection. scalability per se has some impact on VIM
      • Tommy: this means "monitoring and failure detection" would be the main crossing point with Doctor? so far yes, but in next meeting new WIs may be created
    2. NFV software upgrade mechanism (Stefan Arntzen - Huawei)
      • different to traditional upgrades: "old traffic" can still go to "old software version", whereas new traffic/connections can go to the new s/w version in parallel (this is enabled by virtualization); no hard switchover needed; old system/version is still running and it can be switched back in case of issues with the new version
      • assumption is that this can be done stateless (otherwise it would be more complex)
    3. Active monitoring for NFV (Gurpreet)
      • Alistair Scott: interested in passive monitoring; where are attachment points for passive monitoring? REL has not looked into passive monitoring for NFV
      • Gurpreet: identify use cases where current implementation has gaps
  3. OPNFV Doctor
    • Stefan: plan to use OpenStack components?
    • Ryota: we are not only focusing on OpenStack, but on open source in general
    • Tommy: but OpenStack is the primary s/w used in OPNFV
    • Gurpreet: work flow for upstream community?
    • Ryota: define requirements, gap analysis, provide blueprints, but no coding in Doctor project
  4. Next action:
    • arrange meeting in the next NFV event
    • keep in touch

April 21, 2015

Agenda:

  1. Deliverable
    • Structure: uploaded to Gerrit and split into multiple files; need consensus from community
    • Propose requirement project deliverable template based on Doctor's (WIP: Carlos, Ryota, Ildiko)
    • Review comments received so far
  2. Blueprints

Minutes:

  • Status of BPs
    • Nova BP
      • concept has been accepted
      • single API to mark down nova-compute and change status of VMs
      • the scope has been narrowed, topic was modified to "mark-host-down"
    • Ceilometer BP
      • trying to have summit session regarding Ceilometer event topic
      • the demo Ryota mentioned in Gerrit is the same as the Prague hackfest one
  • Deliverable
    • We still have review comments which are not yet reflected in the doc
    • The RST files have been split; the format would be a template for other requirement projects
    • how we can publish …
  • Inspector API
    • API point is OK
    • action(doctor): describe framework and inspector API
  • Logistics
    • from next week, we will start to use IRC (e.g. sharing links)
    • at #opnfv-meeting channel
  • Next meeting
    • joint meeting with NFV REL
    • action: Ryota to send out agenda
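The Nova blueprint above ("mark-host-down") boils down to a single call that marks a compute host down and changes the state of its VMs at once, so consumers see the failure immediately instead of waiting for a heartbeat timeout. A toy sketch of that idea; all class and method names are illustrative assumptions, not Nova's actual API:

```python
# Hypothetical sketch of the "mark-host-down" concept: one call marks the
# compute host as forced-down and flips its VMs to ERROR so consumers can
# react at once. Not Nova's real API; names are illustrative only.

class ComputeHost:
    def __init__(self, name, vms):
        self.name = name
        self.forced_down = False
        self.vms = {vm: "ACTIVE" for vm in vms}

class Vim:
    def __init__(self):
        self.hosts = {}

    def add_host(self, host):
        self.hosts[host.name] = host

    def mark_host_down(self, name):
        """Single API: mark nova-compute down and update VM states."""
        host = self.hosts[name]
        host.forced_down = True
        for vm in host.vms:
            host.vms[vm] = "ERROR"  # visible to consumers immediately
        return host

vim = Vim()
vim.add_host(ComputeHost("compute-1", ["vm-a", "vm-b"]))
host = vim.mark_host_down("compute-1")
print(host.forced_down, host.vms)
```

The point of the narrowed scope is exactly this: the VIM does not decide on recovery, it only exposes the failure quickly.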

TBD

fill missing meetings

March 5, 2015

Agenda:

Minutes:

  • Status of Document
    • 3.1 there is an unclarified maintenance use case –> Tommy will send the ETSI Doc Ph2 to us
    • 3.1.2 text is missing –> Gerald can add text
    • 3.5
    • 4 Gap analysis: are there any related BPs that should be added as references? –> BP links should be added
      • Carlos will help check if there are blueprints already filled in and add to the document
    • 5 Detailed implementation plan –> all, please check this chapter
    • Fig.9 what does the maintenance sequence look like? –> TBD (tentative FB and sequence are proposed)
    • Schedule
      • 1 week for Doctor internal review (-3/17)
      • 2 weeks for OPNFV community review (-3/31)
      • 2 weeks for Doctor team work (-4/14)
      • (OPNFV release 1 4/23)
  • Status of BPs: not handled

March 5, 2015

Ad-hoc meeting for blueprint planning

Agenda:

  • Discuss BPs
  • Report on meeting with HA team

Minutes:

  • Presentation by Ryota on Ceilometer and where our BPs fit to the Ceilometer architecture (see slides)

March 3, 2015

Agenda:

  • BPs alignment
  • Mike from Intel / swfastpathmetric will join this team
  • Hijacking the doctor meeting to discuss blueprints next week for all projects
  • Doc status

Minutes:

  • BPs alignment
    • Bring BP topic also to TSC
    • BP should have a list of parameters/data missing
    • OpenStack BPs shall be in this format (using OpenStack terminology such that a developer can read/understand it)
    • Proposal to have a high-level description of the BPs in the Wiki
    • Ceilometer is the right place to implement such feature, although other alternatives may exist
    • TODO(Ryota): prepare slides, provide IRC available time
  • Hijacking the doctor meeting to discuss blueprints next week for all projects:
    • we should keep 30min for discussing the requirement deliverable
  • Doc status: not handled

Feb 24, 2015

Requirement project round table @ Prague Hackfest

Participants: Ryota (NEC), Gerald (DOCOMO), Bertrand (DOCOMO), Ashiq (DOCOMO), Tomi (Nokia), Tommy (Ericsson), Carlos (NEC), Gianluca Verin (Athonet) Daniele Munaretto (Athonet), Sharon (ConteXtream), Christopher (Dorado Software), Russell (Red Hat), Frank Baudin (Qosmos), Chaoyi (Huawei), Al Morton (AT&T), Xiaolong (Orange), (Oracle), Randy Levensalor (CableLabs) …

Slides can be found here: https://wiki.opnfv.org/_media/doctor/opnfv_doctor_prague_hackfest_20150224.n.pptx

Minutes:

  • Use case 1 "Fault management"
    • Main interest: northbound I/F
    • Reaction of VNFM is out of scope
    • VM (compute resources) is the first focus; storage and network resources will follow at a later stage
  • Fault monitoring: a pluggable architecture is needed to catch different (critical) faults in NFVI and enable use of different monitoring tools. Predictor (fault prediction) may also be one input.
  • 4 functional blocks:
    • controller (e.g. Nova), monitor (e.g. Nagios, Zabbix), notifier (e.g. re-use Ceilometer), inspector (fault aggregation etc)
  • VM state in resource map, e.g. "fault", "recovery", "maintenance" (more than just a heartbeat)
  • Question of whether other OpenStack components (e.g. Cinder, Glance, etc) can report events/faults
  • What is the timescale to receive such fault notification? this would be helpful for the motivation in the blueprints. Telco nodes: i.e. less than 1s, switch to ACT-SBY as soon as possible.
  • Preference is event-based notification, not polling; should be configurable.
  • Telco use case would have few hundreds of nodes, not thousands of nodes.
  • Demo 1 (using default Ceilometer) takes approximately 65 seconds to notify the fault (90 seconds total including spawning a new VM), while demo 2 only takes ≤ 1 second (26 seconds total)
  • Pacemaker is running at application layer; different scope.
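The four functional blocks named above (controller, monitor, notifier, inspector) and the push-based flow that made demo 2 so much faster than polling can be sketched as toy Python; all names are illustrative assumptions, not an agreed design:

```python
# Toy wiring of the four Doctor functional blocks. The monitor pushes
# faults (no polling), the inspector aggregates/filters, and the notifier
# fans out to subscribed consumers (e.g. a VNFM). Names are illustrative.

class Notifier:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def notify(self, event):
        for callback in self.subscribers:
            callback(event)

class Inspector:
    """Aggregates raw faults and decides what goes northbound."""
    def __init__(self, notifier):
        self.notifier = notifier

    def on_raw_fault(self, host, fault):
        if fault["severity"] == "critical":
            self.notifier.notify({"host": host, "state": "fault", **fault})

class Monitor:
    """Stands in for Nagios/Zabbix: pushes faults the moment they occur."""
    def __init__(self, inspector):
        self.inspector = inspector

    def detect(self, host, severity):
        self.inspector.on_raw_fault(host, {"severity": severity})

received = []
notifier = Notifier()
notifier.subscribe(received.append)  # the "consumer" side
monitor = Monitor(Inspector(notifier))
monitor.detect("compute-1", "critical")
print(received)
```

The controller (e.g. Nova) would sit beside this flow and update the resource map ("fault", "recovery", "maintenance") when notified.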

Feb 23, 2015

Doctor/Fastpathmetrics/HA Cross Project Meeting @ Prague Hackfest

Goal:

  • reduce conflicts between requirements projects

Minutes:

  • Project Intro:
  • Identify Overlap:
    • NB I/F
      • Doctor is also requiring fast reaction. objective with HA is similar.
      • HA has more use cases and may send more information on the northbound I/F. VNFM should be informed about changes.
      • Doctor objective is to design a NB I/F.
        • Does HA already have flows available?
        • HA is focusing on application level. Reaction should be as fast as possible. Including the VNFM may slow down the progress.
        • In Doctor we will follow the path through VNFM.
        • In ETSI we have lifecycle mgmt, where the VNFM is responsible for the lifecycle
    • There is certain information the VNFM doesn't know about. In Doctor we call it "consumer".
    • Proposal to do use case analysis for HA. Which use cases may require the VNFM to be involved? "Doctors" will have a look at HA use cases.
    • Which entity resolves race conditions? Some entity in the "north".
    • What about a shared fault collection/detection entity instead of collecting the same information 3 times?
      • Predictor could also notify immediate failures to Doctor.
    • Security issues are not addressed in Doctor. Currently assuming a single operator, where policies ensure who can talk to who.
    • In Doctor we do not look at application faults, only NFVI faults.
    • Huawei: we use Heat to do HA. If one VM dies and Heat finds the Scaling Group below 2, it will start a new VM. This may take more than 60s; we need something faster for HA. Heat doesn't find errors in the applications.
    • Failure detection time is an issue across all projects.
    • Which metrics of fastpath would Doctor be interested in? need to check in detail. Action Item to send metrics to Doctor.
    • Hypervisor may detect failure of VM and take action.
      • Other failures: VM is using heartbeat. it will e.g. reboot after not receiving a heartbeat for 7s.
    • Doctor: if the VIM takes action on its own it may conflict with the ACT-SBY configuration at the consumer side; this is why the consumer should be involved.
    • Which project would address the ping-pong issue that may arise?
    • We need subscription mechanism including filter (which alarms to be notified about). Mapping VM-PM-VNFM can be recorded during the instantiation.
    • Relationship between Doctor and Copper:
      • policy defines e.g. when VIM can expose its interface
      • When to inform a fault, whom to inform etc is all a kind of policy.
      • Copper has both pro-active and reactive deployment of policies. In reactive case, there may be conflict when both Copper and Doctor receive the policies.
  • Wrapup:
    • Overlap in fault management
    • FastPath: monitor traffic metrics; Doctor will need some of the metrics in the VIM. plan to do regular meetings.
    • HA: large project with wider scope than Doctor, different use cases. direct flow (to be faster). task to check each others NB I/F in order not to block each other.
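The subscription mechanism with per-consumer alarm filters called for above could look roughly like this minimal sketch (interface names are hypothetical, not an agreed design):

```python
# Minimal sketch of an alarm bus with per-subscription filters, so each
# consumer (e.g. a VNFM) is only notified about the alarms it cares about.
# The VM-PM-VNFM mapping recorded at instantiation would drive the filter.

class AlarmBus:
    def __init__(self):
        self.subs = []  # list of (filter_fn, callback)

    def subscribe(self, callback, severity=None, host=None):
        def matches(alarm):
            return ((severity is None or alarm["severity"] == severity) and
                    (host is None or alarm["host"] == host))
        self.subs.append((matches, callback))

    def publish(self, alarm):
        for matches, callback in self.subs:
            if matches(alarm):
                callback(alarm)

vnfm_alarms = []
bus = AlarmBus()
# this VNFM only cares about critical alarms on the host running its VMs
bus.subscribe(vnfm_alarms.append, severity="critical", host="compute-1")
bus.publish({"host": "compute-1", "severity": "critical", "fault": "nic down"})
bus.publish({"host": "compute-2", "severity": "critical", "fault": "nic down"})
bus.publish({"host": "compute-1", "severity": "warning", "fault": "temp"})
print(len(vnfm_alarms))
```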

Feb 17, 2015

Agenda:

  • Ashiq's proposed agenda for the Prague Hackfest next week
  • Doctor PoC Demo
  • Document Status

Minutes:

  • Participants: Ryota Mibu, Khan Ashiq, Gerald Kunzmann, Carlos Goncalves, Susana, Thinh Nguyenphu, Tommy Lindgren, Bryan Sullivan, Bertrand Souville, Michael Godley, Manuel Rebellon, Uli Kleber
  • Hackfest
    • Ryota to prepare some slides on what's going to be presented in the demo by end of week
      • Carlos will help Ryota
    • Requirement projects are scheduled for Tuesday
    • Demo:
      • OpenStack Controller, Zabbix, 2-3 OpenStack compute servers to launch VMs, client to stress the system, Neutron, Nova, LB as a service, Heat, Ceilometer
      • Destroy one of the VM running a WebService
      • Key message to OpenStack? Which gap do you want to present? Why use Zabbix instead of Ceilometer (show first gap in our list)? Prepare for such questions.
  • Document status
  • HA and fault prediction project and "Software FastPath Service Quality Metric" project
    • Proposal to meet Monday afternoon after BGS project meeting
      • Carlos will contact all potential participants

Feb 10, 2015

Agenda:

Minutes:

  • OPNFV should be careful with tools projects use and distribute as part of the platform due to their licensing
  • Framework should be modular enough to be pluggable with multiple monitoring solutions
  • Editors for each first deliverable section were assigned
  • Gap analysis to be further extended
  • Section editors should have an initial draft ready by Feb 18
  • Deliverable editors (Gerald and Ashiq) will have Feb 19-20 to compile everything together for the Prague Hackfest

Feb 6, 2015

Extra meeting for Implementation Planning

Agenda & Minutes:

  • Implementation Planning
    • Topic and agreement can be found in Slides.

Feb 2, 2015

Agenda:

Minutes:

  • Participants: Carlos Goncalves, Don Clarke, Ryota Mibu, Tomi Juvonen, Yifei Xue, Al Morton, Bertrand Souville, Gerald Kunzmann, Manuel Rebellon, Ojus K. Parikh, Ashiq Khan, Pasi, Paul French, Charlie Hale
  • Ryota presents a refreshed Timeline
    • Initial draft of requirement document should be ready before the Hackfest 23-24 Feb in Prague
    • Ashiq asks about task allocation. See: https://etherpad.opnfv.org/p/doctor
    • Target architecture is OpenStack; Implementation plan is on how this will be realized in upstream projects, e.g. interfaces.
      • one proposal is using Zabbix. all is already there.
  • Predictor project:
    • still in proposal phase; we should keep an eye on it, as it relates to Doctor
  • Implementation plan:
    • for evacuation we should stay implementation independent, not OpenDaylight or Neutron (they may use it in the actual testbed, but we should restrict Doctor to the interface definitions)
    • it is not intended to use Ceilometer, but a similar service.
      • Agreement to use Zabbix for the GapAnalysis.
      • Doctor will have its own RestAPI as wrapper abstracting the in use monitoring solution underneath (e.g. Zabbix)
    • it is necessary to be able to isolate a faulty machine, such that new VMs are not started on this machine.
    • different ways/workflows for recovery; we should start by implementing a few sample workflows
      • e.g. switch to active hot standby VM, then instantiate a new hot standby instance (this is a Doctor requirement)
      • evacuation (if time allows) vs active hot standby (immediate action)
      • VNFM is deciding about the best action (this is out of scope of Doctor; Doctor only specifies NB I/F)
    • we need to get into more details for this plan. discussion should go via email to make progress before next meeting
  • Hackfest
    • Take to the hackfest what we have, i.e. if we "only" have one implementation plan so far let's use this.
    • Doctor is planned for Tuesday. Also other requirement projects will be discussed on Tuesday.
  • Ryota did cleanup of Doctor Wiki page
  • Doctor team participation in the OpenStack Summit Vancouver?
    • related topics.
    • most important blueprints should be ready by May and could be presented there
    • Proposal: Talk on a more general topic including Doctor requirements
    • Carlos will look into it
  • Meeting time → via email
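The implementation plan above calls for a Doctor REST API that wraps whichever monitoring tool runs underneath (e.g. Zabbix). A minimal sketch of that abstraction layer, with canned data standing in for real Zabbix API calls; all class and method names are assumptions, not an agreed interface:

```python
# Sketch of the planned abstraction: a stable Doctor-facing API on top of
# a pluggable monitoring backend. ZabbixBackend would normally call the
# Zabbix API; here it returns canned data for illustration.

from abc import ABC, abstractmethod

class MonitorBackend(ABC):
    @abstractmethod
    def active_faults(self, host): ...

class ZabbixBackend(MonitorBackend):
    """Placeholder for a real Zabbix client."""
    def __init__(self, data):
        self.data = data

    def active_faults(self, host):
        return self.data.get(host, [])

class DoctorApi:
    """Northbound view, independent of the monitoring tool in use."""
    def __init__(self, backend: MonitorBackend):
        self.backend = backend

    def host_status(self, host):
        faults = self.backend.active_faults(host)
        return {"host": host, "healthy": not faults, "faults": faults}

api = DoctorApi(ZabbixBackend({"compute-1": ["disk failure"]}))
print(api.host_status("compute-1"))
print(api.host_status("compute-2"))
```

Swapping Zabbix for Nagios or another tool would only mean providing another `MonitorBackend`, which is the point of the wrapper.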

Jan 26, 2015

Agenda:

  • Discuss maintenance use case - Tommy
  • Implementation outside Nova - Tomi

Minutes:

  • Timeline milestone planning
    • Soft schedule for Fault Table, set 1 milestone end of Jan
    • Requirement Document should be finished by Mar 15 ? - No
      • Ashiq suggested doc should be finished by end of February
    • Set some milestone on Hackfest at Prague
      • First draft
    • TODO(Ryota): create wiki page
  • Discuss maintenance use case - Tommy
  • Implementation outside Nova - Tomi
    • Network resources?
      • Evacuation will also move the network, regardless of whether it is OpenDaylight or Neutron.
      • We are trying to tackle step-by-step, first focusing on Nova.
        • The Ceilometer approach seems better than using metadata on Nova
          • What is the relation to Nova metadata? Ceilometer is terrible for FM: it uses polling and is suited to PM. It would be an extra step causing delay, it generates a lot of network traffic, and the database consumes a lot of memory.
    • Should Doctor trigger power-off of the host?
      • One needs to fence the host by powering it off: through the OS (or Nova) if reachable, or through IPMI if that is the only way to reach it. In some cases the host can be rebooted as recovery, but in most cases it is faulty and needs to be moved to a disabled aggregate or marked for maintenance. If the host cannot be reached at all, evacuation through Nova will isolate it anyway, as everything (network, disk) is moved to another host.
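The fencing workflow described above can be sketched as a short sequence: power off via the OS/Nova if reachable, otherwise via IPMI; then disable the host so the scheduler avoids it, and evacuate. The function below is an illustrative placeholder that only records the steps, not real Nova/IPMI calls:

```python
# Sketch of the fencing/recovery steps from the minutes. In a real system
# each step would call Nova, ipmitool, etc.; here we just log decisions.

def fence_and_recover(host, os_reachable, log):
    # power off: through the OS/Nova if reachable, otherwise via IPMI
    if os_reachable:
        log.append(f"shutdown {host} via OS/Nova")
    else:
        log.append(f"power off {host} via IPMI")
    # keep the scheduler away from the faulty host
    log.append(f"disable {host} (disabled aggregate / maintenance)")
    # evacuation moves VMs (network, disk) to other hosts, isolating it
    log.append(f"evacuate VMs from {host}")
    return log

steps = fence_and_recover("compute-1", os_reachable=False, log=[])
print(steps)
```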

Jan 19, 2015

Agenda:

  • Review of timeline of Doctor project
  • List of tasks

Minutes:

Jan 12, 2015

Agenda:

Minutes:

    • Action: check and revise / update / extend
    • Some faults are specific to certain HW, others are more general
    • We should try to come up with a high level description of common faults
    • Proposal not to go to such level of detail.
    • Keep one fault table and use the current fault table for study of the scope of Doctor
    • Are there other faults that cannot be detected by SNMP and Zabbix_agent?
    • We need a tool in Doctor that can retrieve such alarms. Should this tool be integrated with OpenStack or be independent? Should be kept open;
  • ETSI meeting in Prague: proposal to meet there
    • Action: edit this page for ongoing work on the gap analysis
  • Doctor wiki page updated
  • Timeplan:
    • Action: Ryota to prepare a timeplan/timeline
    • Timeplan can be checked in each week's meeting
    • Reminder: some documents should be available by March
  • Next meeting: Jan 19th

Dec 22, 2014

Agenda:

  • work item updates
  • Fault table
  • GAP analysis template
  • Wiki pages

Minutes:

  • Work item updates
    • Fault table
      • Status: waiting for Palani's initial commit
      • Tomi also made initial list of faults.
      • TODO(Tomi): Open new wiki page to share the fault list
    • GAP analysis template
    • Wiki pages
      • Our plan for the wiki/doc structure seems to be OK, since there were no questions or objections in the past week.
      • TODO(Ryota): Update wiki pages
  • Fault notification at the Northbound I/F
    • Critical faults
      • It was agreed that we should characterize faults as critical or non-critical when reporting to VNFs.
      • We must report all critical faults northbound. We may report some of the non-critical faults, need further study.
    • Fault aggregation
      • Discussed whether to aggregate different alarms and faults before notifying VNFs via the northbound interface.
      • General agreement that there should be some level of aggregation, but need to figure out what events need to be aggregated.
      • Some suggested that VNFs should be notified only if the faults are urgent.
    • Notifying data center operations folks about hardware faults is something that seems to be out of scope for this project. Tomi: I think they need the information and there should not be a duplicate mechanism to detect faults to be able to make HW maintenance operations. Surely they will not need the notification that we would send to VNFM, but the actual alarm information we are gathering to make those notifications. Anyhow I agree that this is not in our scope and tools like Zabbix that we could use here can easily be configured then for this also in case HW owner is interested.
    • Why should warnings be sent to VNFs (such as cpu temp rising but not critical yet)? VNFs might want to take action such as setup/sync hot standby and this could take some time.
  • Are there open source projects already to detect hypervisor or host OS faults?
    • OpenStack Nova devs said it should be kept simple; providers need to monitor processes on their own.
    • But there appear to be some open source tools (SNMP polling or SNMP agents on the host). Need to pull things together.
  • Next call will be on January 12th.

Dec 15, 2014

Agenda:

Minutes:

  • wiki/doc structure
    • Agreed to have three sections
      • UseCase (High-level description)
      • Requirement (Detail description, GAP Analysis)
      • Implementation (includes monitor tools and alternatives)
  • Faults table
    • will create a table that explains the story for each fault
    • columns would be physical fault, how to detect, affected virtual resource and actions to recover
    • in three categories Compute, Network and Storage, will start on Compute first
    • also try to keep separate table/categories for critical and warning
    • TODO(Palani): provide fault table example
    • TODO(Gerald): create first version of fault table after getting table example
  • framework
    • how we handle combinations of faults and future H/W faults is still an open question
    • suggestion to have fault management "framework" that should be configurable to define faults by developers or operators
  • Gap analysis
    • We should have list of items so that we can avoid duplicated work
    • TODO(Ryota): Post first item to show example how we describe that could be template for GAP analysis
  • Monitoring
    • We should check monitoring tools as well: Nagios, Ganglia, Zabbix
  • Check TODOs from the last meeting
    • seems almost all items are done or started (but we could not check 'fault management scenario based on ETSI NFV Architecture', although there is a slide on the wiki)
  • Next meetings
    • Dec 22, 2014
    • Jan 12, 2015 # skip Jan 5th
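One illustrative row of the fault table agreed above, using the proposed columns (physical fault, how to detect, affected virtual resource, actions to recover), the Compute/Network/Storage categories, and the critical/warning split. The entry itself is an example, not agreed content:

```python
# Example shape of the planned fault table, as plain Python data.
# Column names follow the minutes; the sample row is hypothetical.

fault_table = {
    "Compute": [
        {
            "physical_fault": "host NIC failure",
            "how_to_detect": "SNMP trap / Zabbix agent",
            "affected_virtual_resource": "VMs on the host lose connectivity",
            "actions_to_recover": "switch to hot standby VM, then evacuate",
            "severity": "critical",  # critical vs warning kept separate
        },
    ],
    "Network": [],  # to be filled after Compute
    "Storage": [],  # to be filled after Compute
}
print(len(fault_table["Compute"]))
```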

Dec 8, 2014

Agenda:

  • How we shape requirements
  • Day of the week and time of weekly meeting
  • Tools: etherpad, ML, IRC?
  • Project schedule, visualization of deliverables

Minutes:

  • How we shape requirements
    • Use case study first
    • Gap analysis should include existing monitoring tools like Nagios etc.
    • How do we format fault messages and VNFD elements for alarms?
    • Fault detection should be designed in a common/standard manner
    • Those could be implemented in existing monitoring tools, separate from OpenStack
    • What are "common" monitoring tools? there are different tools and configurations
    • Focus on H/W faults
    • Do we really need that kind of notification mechanism? Can we use errors from API polling, errors detected by the application, or auto-healing by the VIM?
      • A real vEPC needs to know about faults that cannot be found by the application, like abnormal temperature.
      • The VIM should not run auto-healing for some VNFs.
      • There are two cases/sequences defined in ETSI NFV MANO in which fault notifications are sent from the VIM to the VNFM and to the Orchestrator.
      • An alarming mechanism is good to reduce the number of requests from users polling virtual resource status.
    • We shall categorize requirements and create new table on wiki page. (layer?)
    • → A general view of the participants is to have the 'HW monitoring module' outside of OpenStack
    • TODOs
      • Open etherpad page for collaborative working (Ryota)
      • Collect use cases for different fault management scenarios (Ryota)
      • Set IRC (Carlos)
      • Provide Gap Analysis (Dinesh, Everyone)
      • Provide fault management scenario based on ETSI NFV Architecture (Ashiq)
      • List fault items to be detected (Ashiq, Everyone)
  • Day of the week and time of weekly meeting
    • Monday, 6:00-7:00 PT (14:00-15:00 UTC)
    • TODO(Ryota): create weekly meeting entry in GoToMeeting
  • Tools: etherpad, ML, IRC?
    • We will use opnfv-tech-discuss ML with "[doctor]" tag in a subject.
    • We will use "opnfv-doctor" IRC channel on chat.freenode.net .
    • TODO(Carlos): update wiki
  • Project schedule, visualization of deliverables
    • All team members are asked to check project proposal page and slides that are approved by TSC and show our schedule and deliverables.
    • Northbound I/F first specification by Dec 2014.

Dec 1, 2014

Agenda:

Minutes:

  • Project proposal
    • There were two comments at project review in TSC meeting (Nov 26)
    • Ashiq and Qiao had talked before this meeting, and agreed that we would not eliminate duplication at proposal phase
    • Project proposal was fixed by some members
      • The project category was changed to requirement only
      • In the new revision of the project proposal, we removed detailed descriptions which don't suit a requirement project
      • Links to the original project proposal were replaced to point to the new page; the link to the old page that described further details can be found at the bottom of the new proposal page
      • We should not edit the proposal page after TSC approval, to keep evidence of what we planned at the beginning of the project
      • "Auto recovery" is missing; will continue discussion by mail with clarification from Tomi

Nov 17, 2014

Agenda:

  • Scoping and Scheduling (what feature to be realized in what time frame)
  • Resources available and necessary for this project
  • Technical aspects and relevance to upstream projects
  • How to socialize with upstream projects

Minutes:

doctor/meetings.1433923414.txt.gz · Last modified: 2015/06/10 08:03 by Gerald Kunzmann