This wiki page defines the target system state created by a successful execution of the BGS. This target system state should be independent of the installer approach taken.
The OPNFV Target System is currently defined as OpenStack High Availability (HA) + OpenDaylight Neutron integration, across 3 Controller nodes and 2 Compute nodes. The Controller nodes run all OpenStack services outlined below in this wiki except for nova-compute. HA is defined as having at least MySQL and RabbitMQ, along with all other dependencies (Corosync, Pacemaker), working in an Active/Active or Active/Passive state.
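As an illustration of the HA cluster membership, a minimal corosync.conf for the 3 Controller nodes might look like the sketch below. The node names and transport are assumptions, not part of this definition; Pacemaker then manages the MySQL and RabbitMQ resources on top of this cluster.

```
totem {
    version: 2
    cluster_name: opnfv-ha
    transport: udpu
}

nodelist {
    node {
        ring0_addr: controller-1
        nodeid: 1
    }
    node {
        ring0_addr: controller-2
        nodeid: 2
    }
    node {
        ring0_addr: controller-3
        nodeid: 3
    }
}

quorum {
    provider: corosync_votequorum
}
```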
The full hardware specification is outlined by the Pharos project below:
|Component||Software||Version||Notes|
|Base OS||CentOS||7||Base OS may vary per Installer. CentOS 7 is the current OPNFV standard|
|SDN Controller||OpenDaylight||Helium SR2||With Open vSwitch|
Install only core components on all target system nodes. Additional dependencies will be included when specific packages are added. The list below gives, for each Installer, the base OS per Controller/Compute node and the extra packages installed on those nodes.
OpenStack Juno Components
|Component||Required||Release||Notes|
|MySQL||Yes||Juno||Must be HA|
|RabbitMQ||Yes||Juno||Must be HA|
|Pacemaker cluster stack||Yes||Juno||Required for HA|
|Corosync||Yes||Juno||Required for HA|
|Cinder||Yes||Juno||Required to use Ceph Storage as Cinder backend|
Current requirements are that Cinder will be used for block storage, backed by Ceph. There is currently no requirement for external, dedicated storage. Storage is to be implemented as a Ceph storage pool of multiple OSDs for HA, along with several Ceph Monitors. The standard implementation is 3 Ceph OSDs and 3 Ceph Monitors, 1 OSD/Mon per Controller. The Controller nodes' internal hard drives will be used for redundant storage.
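On the Cinder side, using the Ceph pool as backend comes down to pointing the RBD volume driver at it. The fragment below is a sketch only; the pool name, Ceph user, and secret UUID are placeholders whose actual values depend on the installer.

```ini
# /etc/cinder/cinder.conf (illustrative fragment)
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt secret UUID>
```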
|Karaf Feature||Version||Description|
|odl-config-persister-all||0.2.6-Helium-SR2||OpenDaylight :: Config Persister :: All|
|odl-aaa-all||0.1.2-Helium-SR2||OpenDaylight :: AAA :: Authentication :: All Features|
|odl-ovsdb-all||1.0.2-Helium-SR2||OpenDaylight :: OVSDB :: all|
|odl-ttp-all||0.0.3-Helium-SR2||OpenDaylight :: ttp :: All|
|odl-openflowplugin-all||0.0.5-Helium-SR2||OpenDaylight :: Openflow Plugin :: All|
|odl-adsal-compatibility-all||1.4.4-Helium-SR2||OpenDaylight :: controller :: All|
|odl-adsal-all||0.8.3-Helium-SR2||OpenDaylight AD-SAL All Features|
|odl-config-all||0.2.7-Helium-SR2||OpenDaylight :: Config :: All|
|odl-netconf-all||0.2.7-Helium-SR2||OpenDaylight :: Netconf :: All|
|odl-mdsal-all||1.1.2-Helium-SR2||OpenDaylight :: MDSAL :: All|
|odl-yangtools-all||0.6.4-Helium-SR2||OpenDaylight Yangtools All|
|odl-restconf-all||1.1.2-Helium-SR2||OpenDaylight :: Restconf :: All|
|odl-netconf-connector-all||1.1.2-Helium-SR2||OpenDaylight :: Netconf Connector :: All|
|odl-akka-all||1.4.4-Helium-SR2||OpenDaylight :: Akka :: All|
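An installer can preload the features above through Karaf's boot configuration rather than installing them one by one; the fragment below is an illustrative subset of etc/org.apache.karaf.features.cfg, not a prescribed list:

```
featuresBoot = config,standard,region,package,kar,ssh,management,odl-restconf-all,odl-ovsdb-all,odl-openflowplugin-all
```

Individual features can also be installed interactively from the Karaf console with feature:install <feature-name>.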
|Node config: Puppet|
|Example VNF1: Linux||CentOS||7|
|Example VNF2: OpenWRT||version 14.07 (Barrier Breaker)|
|Container Provider: Docker||docker.io (lxc-docker)||latest||FUEL delivery of ODL|
Describe which L2 segments are configured (i.e. for management, control, use by client VNFs, etc.), how these segments are realized (e.g. VXLAN between OVSs), and which segment numbering (e.g. VLAN IDs, VXLAN IDs) is used. Describe which IP addresses are used, which DNS entries (if any) are configured, default gateways, etc. Describe if and how segments are interconnected.
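As an example of how a VXLAN segment between OVS instances is realized, a tunnel port is added to the tunnel bridge on each host. The bridge name, remote IP, and VNI below are placeholders, not values from this plan:

```shell
# Illustrative only: values depend on the actual network addressing plan.
ovs-vsctl add-port br-tun vxlan-1001 -- \
    set interface vxlan-1001 type=vxlan \
    options:remote_ip=192.0.2.12 options:key=1001
```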
The list and purpose of the subnets used are defined here: Network addressing and topology blueprint - FOREMAN
Currently there are two approaches to VLAN tagging:
It was agreed not to use VLAN tagging unless the target hardware lacks a sufficient number of interfaces. Tagging remains a viable option, however, for users who want to implement the target system in a restricted hardware environment.
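In such a restricted environment, several segments can share one physical NIC via VLAN subinterfaces. The interface name, VLAN ID, and address below are placeholders for illustration only:

```shell
# Illustrative only: create a tagged subinterface for, e.g., a management segment.
ip link add link eth0 name eth0.300 type vlan id 300
ip addr add 192.0.2.5/24 dev eth0.300
ip link set eth0.300 up
```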
The following picture shows how ODL connects to Neutron through the ML2 plugin and to nova-compute through an OVS bridge. (Not yet finished; Ceph storage will be added, and the approach with ODL in a Docker container will be added.)
Multiple labs will eventually be working together across geographic boundaries.