This wiki defines the target system state that is created by a successful execution of the BGS. This target system state should be independent of the installer approach taken.
  
===== OPNFV Target System Definition =====
  
The OPNFV Target System is currently defined as OpenStack High Availability (HA) plus OpenDaylight Neutron integration, deployed across 3 Controller nodes and 2 Compute nodes. The Controller nodes run all OpenStack services outlined below in this wiki except for nova-compute. HA is defined as having at least MySQL and RabbitMQ, along with all other dependencies (Corosync, Pacemaker), working in an Active/Active or Active/Passive state.
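
As a rough illustration of what "HA" means operationally, the following is a minimal sketch (not part of the BGS) that polls the clustered services on each Controller. The controller hostnames, the systemd unit names (''mariadb'', ''rabbitmq-server'', ''corosync'', ''pacemaker'') and the use of passwordless SSH are assumptions for illustration only; the actual names depend on the installer.

<code python>
# Minimal sketch: verify the HA-relevant services are active on every Controller.
# Hostnames, systemd unit names and passwordless SSH access are assumed values.
import subprocess

CONTROLLERS = ["controller1", "controller2", "controller3"]          # assumed hostnames
HA_UNITS = ["mariadb", "rabbitmq-server", "corosync", "pacemaker"]   # assumed unit names

def unit_active(host, unit):
    """Return True if 'systemctl is-active <unit>' reports active on the host."""
    result = subprocess.run(
        ["ssh", host, "systemctl", "is-active", unit],
        capture_output=True, text=True)
    return result.stdout.strip() == "active"

if __name__ == "__main__":
    for host in CONTROLLERS:
        for unit in HA_UNITS:
            state = "OK" if unit_active(host, unit) else "NOT ACTIVE"
            print(f"{host}: {unit}: {state}")
</code>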
  
The full hardware specification is outlined by the Pharos project:
  * [[https://wiki.opnfv.org/pharos/pharos_specification|Pharos Specification]]
===== Key Software Components and associated versions =====
  
^ Component Type ^ Flavor ^ Version ^ Notes ^
| Base OS | CentOS | 7 | Base OS may vary per Installer; CentOS 7 is the current OPNFV standard |
| SDN Controller | OpenDaylight | Helium SR2 | With Open vSwitch |
| Infrastructure controller | OpenStack | Juno | |
  
  
==== Target System Operating System and Installed Packages ====
  
Install only core components on all target system nodes. Additional dependencies will be included when specific packages are added. The table below lists, for each Installer, the base OS used on the Controller/Compute nodes, the packages installed on the Jumphost and the nodes, and the OpenStack services deployed.
  
  
^ Installer ^ Base OS ^ Jumphost Package List ^ Node Package List ^ OpenStack Services List ^
| Foreman/QuickStack | CentOS 7 | [[jh_foreman_package_list|Foreman Jumphost Package List]] | [[foreman_package_list|Foreman/QuickStack Package List]] | [[os_foreman_service_list|Foreman OpenStack Service List]] |
| Fuel | | | | |
| OpenSteak | | | | |
  
==== OpenStack Juno ====
OpenStack Juno Components [[http://www.openstack.org/software/juno/|OpenStack Juno]]
  
^ Component ^ Required? ^ Version ^ Notes ^
| Nova | Yes | Juno | |
| Glance | Yes | Juno | |
| Neutron | Yes | Juno | |
| Keystone | Yes | Juno | |
| MySQL | Yes | Juno | Must be HA |
| RabbitMQ | Yes | Juno | Must be HA |
| Pacemaker cluster stack | Yes | Juno | Required for HA |
| Corosync | Yes | Juno | Required for HA |
| Ceilometer | No | Juno | |
| Horizon | Yes | Juno | |
| Heat | No | Juno | |
| Swift | No | Juno | |
| Cinder | Yes | Juno | Required to use Ceph storage as the Cinder backend |

==== OPNFV Storage Requirements ====
The current requirement is that Cinder will be used for block storage, backed by Ceph. There is currently no requirement for external, dedicated storage. Storage is to be implemented as a Ceph storage pool of multiple OSDs for HA, along with several Ceph Monitors. The standard implementation is 3 Ceph OSDs and 3 Ceph Monitors, 1 OSD/Mon per Controller. The Controller nodes' internal hard drives will be used for redundant storage.
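
To make the sizing concrete, the following is a minimal sketch (illustration only) of how usable capacity relates to the 3-OSD layout above. The per-drive size and the replicated pool size of 3 are assumptions; the spec does not fix either value.

<code python>
# Minimal sketch: usable capacity of a replicated Ceph pool spread over the
# Controller-local OSDs.  Drive size and replica count are assumed values.
OSDS = 3                 # one OSD per Controller, per the standard implementation
DRIVE_TB = 1.0           # assumed usable size of each Controller's internal drive
REPLICA_COUNT = 3        # assumed replicated pool size (one copy per Controller)

raw_tb = OSDS * DRIVE_TB
usable_tb = raw_tb / REPLICA_COUNT   # each object is stored REPLICA_COUNT times

print(f"Raw capacity:    {raw_tb:.1f} TB across {OSDS} OSDs")
print(f"Usable capacity: {usable_tb:.1f} TB with {REPLICA_COUNT} replicas")
</code>

With 3 Ceph Monitors, the cluster keeps quorum as long as at least 2 of the 3 Controllers are reachable.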
  
==== OpenDaylight Helium SR2 ====
| | odl-akka-all | 1.4.4-Helium-SR2 | OpenDaylight :: Akka :: All |
  
==== Additional Components and Software ====
^ Component ^ Package ^ Version ^ Notes ^
| Hypervisor: KVM | | | |
| Example VNF1: Linux | CentOS | 7 | |
| Example VNF2: OpenWRT | | 14.07 (Barrier Breaker) | |
| Container Provider: Docker | docker.io (lxc-docker) | latest | Fuel delivery of ODL |
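
Since KVM is the required hypervisor, the following is a minimal sketch (illustration only, not part of the target system) of a quick check that a Compute node actually supports it; the paths read are standard Linux locations.

<code python>
# Minimal sketch: check that a node exposes hardware virtualization for KVM.
import os

def cpu_virt_flags():
    """Return the virtualization-related CPU flags found in /proc/cpuinfo."""
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return flags & {"vmx", "svm"}   # Intel VT-x or AMD-V

print("CPU virtualization flags:", cpu_virt_flags() or "missing")
print("/dev/kvm device:", "present" if os.path.exists("/dev/kvm") else "missing")
</code>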

===== Network setup =====
  
// Describe which L2 segments are configured (i.e. for management, control, use by client VNFs, etc.), how these segments are realized (e.g. VXLAN between OVSs) and which segment numbering (e.g. VLAN IDs, VXLAN IDs) are used. Describe which IP addresses are used, which DNS entries (if any are configured), default gateways, etc. Describe if/how segments are interconnected etc. //
  
List and purpose of the used subnets, as defined here (a small addressing sketch follows the lists below):

[[get_started:networkingblueprint|Network addressing and topology blueprint - FOREMAN]]
  * Admin (Management) - //192.168.0.0/24// - admin network for PXE boot and node configuration via Puppet
  * Private (Control) - //192.168.11.0/24// - API traffic and inter-tenant communication
  * Storage - //192.168.12.0/24// - separate VLAN for storage
  * Public (Traffic) - management IPs of OpenStack/ODL plus tenant traffic

[[get_started:networkingblueprint|Network addressing and topology blueprint - FUEL]]
  * Admin (PXE) - 10.20.0.0/16 - Fuel admin network (PXE boot, Cobbler/Nailgun/Mcollective)
  * Public (Tagged VLAN) - subnet depends on the user's network; used for external communication of the Control nodes as well as for L3 NAT (configurable subnet range)
  * Storage (Tagged VLAN) - 192.168.1.0/24 (default)
  * MGMT (Tagged VLAN) - 192.168.0.0/24 (default), used for OpenStack communication
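
Below is a minimal sketch (illustration only, using the Foreman/QuickStack defaults above) that encodes the subnets with Python's standard ''ipaddress'' module and checks that none of them overlap; the network names come from the list above, while the Public range and the check itself are assumptions about how a deployer might validate a site-specific plan.

<code python>
# Minimal sketch: sanity-check the Foreman/QuickStack default addressing plan.
# The subnets below mirror the list above; overlapping ranges would break routing.
import ipaddress
from itertools import combinations

SUBNETS = {
    "Admin (Management)": ipaddress.ip_network("192.168.0.0/24"),
    "Private (Control)":  ipaddress.ip_network("192.168.11.0/24"),
    "Storage":            ipaddress.ip_network("192.168.12.0/24"),
    # Public is site-specific; shown here with a placeholder range (assumption).
    "Public (Traffic)":   ipaddress.ip_network("10.4.0.0/24"),
}

for (name_a, net_a), (name_b, net_b) in combinations(SUBNETS.items(), 2):
    if net_a.overlaps(net_b):
        raise SystemExit(f"Overlap: {name_a} ({net_a}) and {name_b} ({net_b})")

for name, net in SUBNETS.items():
    print(f"{name}: {net} ({net.num_addresses - 2} usable host addresses)")
print("No overlapping subnets.")
</code>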

Currently there are two approaches to VLAN tagging (a host-side tagging sketch for the Fuel approach follows this list):
  * //Fuel// - tagging/untagging is done on the Linux hosts; the switch should be configured to pass tagged traffic.
  * //Foreman// - VLANs are configured on the switch and packets arrive at/leave the Linux hosts untagged.
**It was agreed not to use VLAN tagging unless the target hardware lacks the appropriate number of interfaces. It remains viable, however, for users who want to implement the target system in a restricted hardware environment.**
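
For the Fuel-style, host-side approach, the following is a minimal sketch of how a tagged sub-interface could be created with standard ''iproute2'' commands (wrapped in Python only to keep the examples in one language). The parent interface name, VLAN ID and address are assumptions for illustration; the actual values come from the deployment plan.

<code python>
# Minimal sketch: create a tagged VLAN sub-interface on a node (Fuel-style host
# tagging).  Interface name, VLAN ID and address below are assumed example values.
import subprocess

PARENT_IFACE = "eth1"            # assumed physical interface carrying tagged traffic
VLAN_ID = 102                    # assumed VLAN ID for the Storage network
ADDRESS = "192.168.1.11/24"      # assumed node address on the Storage subnet

vlan_iface = f"{PARENT_IFACE}.{VLAN_ID}"
commands = [
    ["ip", "link", "add", "link", PARENT_IFACE, "name", vlan_iface,
     "type", "vlan", "id", str(VLAN_ID)],
    ["ip", "addr", "add", ADDRESS, "dev", vlan_iface],
    ["ip", "link", "set", vlan_iface, "up"],
]

for cmd in commands:
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)   # requires root privileges on the node
</code>

With the Foreman approach no host-side configuration is needed; the equivalent tagging is done on the switch ports.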
 + 
The following picture shows how ODL connects to Neutron through the ML2 plugin and to nova-compute through an OVS bridge. (//Not yet finished: Ceph storage will be added, and the approach with ODL in a Docker container will be added.//)
{{:get_started:ostack_odl_interconnection.png?800|}}


==== Additional Environment Requirements (for operation) ====

  - access to a valid NTP server (a basic reachability check for NTP and DNS is sketched after this list)
  - access to a valid DNS server (or relay)
  - a web browser with access to the ADMIN network for HTTP-based access
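
As a minimal illustration of these requirements, the sketch below resolves a name through the configured DNS and sends a basic SNTP query to an NTP server. The server names are placeholders (assumptions), not part of the target system definition.

<code python>
# Minimal sketch: confirm the environment can reach a DNS server and an NTP server.
# The hostnames below are placeholder/assumed values for illustration.
import socket

DNS_TEST_NAME = "wiki.opnfv.org"   # any resolvable name proves DNS (or relay) works
NTP_SERVER = "pool.ntp.org"        # assumed NTP server; labs should use their own

# DNS check: resolution uses whatever resolver the node is configured with.
address = socket.gethostbyname(DNS_TEST_NAME)
print(f"DNS OK: {DNS_TEST_NAME} -> {address}")

# NTP check: send a minimal SNTP client request (48 bytes, LI/VN/Mode = 0x1b)
# and wait for any reply on UDP port 123.
request = b"\x1b" + 47 * b"\0"
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(request, (NTP_SERVER, 123))
    reply, _ = sock.recvfrom(512)
print(f"NTP OK: received {len(reply)}-byte reply from {NTP_SERVER}")
</code>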
 + 
===== NTP setup =====
Multiple labs will eventually be working together across geographic boundaries.
  * Centralized logging should be configured with UTC timezone
  
  * // Describe the detailed setup //
    * https://wiki.opnfv.org/pharos
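
As an illustration of the UTC requirement above, the following is a minimal sketch showing log timestamps emitted in UTC; it assumes Python's standard ''logging'' module purely as an example, since the spec does not prescribe a logging stack.

<code python>
# Minimal sketch: emit log records with UTC timestamps, as required for
# centralized logging across labs in different time zones.
import logging
import time

formatter = logging.Formatter(
    fmt="%(asctime)sZ %(name)s %(levelname)s %(message)s",
    datefmt="%Y-%m-%dT%H:%M:%S")
formatter.converter = time.gmtime   # render asctime in UTC instead of local time

handler = logging.StreamHandler()
handler.setFormatter(formatter)

log = logging.getLogger("opnfv.example")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("node synchronized against the lab NTP server")
</code>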