
Getting started with MaaS

MAAS (Metal as a Service)

Metal as a Service (MAAS) brings the language of the cloud to physical servers. It makes it easy to set up the hardware on which to deploy any service that needs to scale up and down dynamically; a cloud being just one example.

With a simple web interface, you can add, commission, update, decommission and recycle your servers at will. As your needs change, you can respond rapidly, by adding new nodes and dynamically re-deploying them between services. When the time comes, nodes can be retired for use outside the MAAS.

MAAS works closely with the service orchestration tool Juju to make deploying services fast, reliable, repeatable and scalable. More information: https://maas.ubuntu.com/

Note: We are going to use MAAS as part of the POD jump-start server so that OS deployment can be handled by MAAS, supporting Ubuntu, CentOS, Windows or customized images (as supported by the MAAS imaging process).

Currently MAAS supports hardware from all major OEMs (HP, Dell, Intel, Cisco, SeaMicro, etc.), including power management of that hardware through IPMI as well as other power management interfaces.

Video: Managing OPNFV Labs with MAAS (YouTube ID: Hz729WSEP0Q)

MAAS machine requirements:

MAAS regional controller (one per lab):

MAAS cluster controller (one per pod):

More information on how to register a cluster controller with the regional controller can be found at https://maas.ubuntu.com/docs/development/cluster-registration.html and https://maas.ubuntu.com/docs/development/cluster-bootstrap.html
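For illustration, once a cluster controller has announced itself to the region, the pending registration can also be accepted through the region API rather than the web UI. The snippet below is a minimal sketch only: it assumes a MAAS 1.x region API at a placeholder address, the Python requests and requests_oauthlib packages, and the three-part API key taken from the MAAS UI; the nodegroups endpoint and accept operation are the ones described in the MAAS 1.x API documentation linked above.

```python
# Minimal sketch: accept a pending cluster controller on the regional
# controller.  Assumes a MAAS 1.x region API and an API key of the form
# "consumer_key:token_key:token_secret" (taken from the MAAS web UI).
import requests
from requests_oauthlib import OAuth1

MAAS_URL = "http://10.0.0.1/MAAS/api/1.0"   # placeholder region address
API_KEY = "consumer:token:secret"           # placeholder API key

consumer_key, token_key, token_secret = API_KEY.split(":")
auth = OAuth1(consumer_key, client_secret="",
              resource_owner_key=token_key,
              resource_owner_secret=token_secret,
              signature_method="PLAINTEXT")

# List the clusters (node groups) known to the region ...
clusters = requests.get(MAAS_URL + "/nodegroups/?op=list", auth=auth).json()
print(clusters)

# ... then accept one by its UUID so the region starts managing it.
pending_uuid = clusters[0]["uuid"]
requests.post(MAAS_URL + "/nodegroups/?op=accept",
              data={"uuid": pending_uuid}, auth=auth)
```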

Installation instructions

Region Controller

Cluster Controller

Reference Links:

The example below has been updated for a cluster of hypervisors hosting virtual pods using VLANs. In this example there is one lab with five virtual pods.

The following items will be needed to add a MaaS cluster to the existing OPNFV MaaS Regional Controller:

MaaS Networks - NOTE: It is recommended to include the 4-digit VLAN ID in the network name if 802.1Q VLANs are used. The list below is just an example (a sketch of registering these networks with MAAS follows the list).

  1. 1g - Shared Lab External Network for Cluster Controller (LabExt) - typically connected to Firewall WAN port
  2. 1g - Shared Lab Internal Management Network for Lights Out, Cluster Controller, Jumpboxes, UPS, PDU, etc… (LabMGMT) - connect to DRAC, iLO, IPMI, and eth0 native access port
  3. 1g - 5 x Isolated Pod Lights Out Network VLANs connect to eth1 as a trunk - for virtual pods the VMs can be built here.
  4. 1g - 5 x Isolated Pod Public Network VLANs connect to eth1 as a trunk
  5. 1g - 5 x Isolated Pod Admin Network VLANs connect to eth1 as a trunk
  6. 10g - 10 x Pod Private Network VLANs for Compute VMs - connect to eth2 as a trunk
  7. 10g - 10 x Pod Private Network VLANs for Compute VMs - connect to eth3 as a trunk
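As a rough illustration of the network layout above, the sketch below registers a few of these networks with MAAS so that commissioned NICs can be associated with them. It assumes the legacy "networks" endpoint of MAAS 1.7/1.8 (newer releases use fabrics/VLANs/subnets instead); the region address, credentials, network names, addresses and VLAN tags are all placeholders, not real lab values.

```python
# Sketch: register example lab networks (LabExt, LabMGMT, per-pod VLANs)
# with MAAS.  Endpoint and parameters follow the legacy MAAS 1.7/1.8
# networks API; all values below are placeholders.
import requests
from requests_oauthlib import OAuth1

MAAS_URL = "http://10.0.0.1/MAAS/api/1.0"   # placeholder region address
auth = OAuth1("consumer", client_secret="",
              resource_owner_key="token", resource_owner_secret="secret",
              signature_method="PLAINTEXT")

networks = [
    # (name with 4-digit VLAN ID where tagged, network, netmask, vlan_tag)
    ("LabExt",              "192.168.10.0", "255.255.255.0", None),
    ("LabMGMT",             "192.168.20.0", "255.255.255.0", None),
    ("Pod1-LightsOut-0101", "172.16.1.0",   "255.255.255.0", 101),
    ("Pod1-Public-0201",    "172.16.2.0",   "255.255.255.0", 201),
    ("Pod1-Admin-0301",     "172.16.3.0",   "255.255.255.0", 301),
]

for name, ip, netmask, vlan in networks:
    data = {"name": name, "ip": ip, "netmask": netmask}
    if vlan is not None:
        data["vlan_tag"] = str(vlan)
    requests.post(MAAS_URL + "/networks/", data=data, auth=auth)
```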

MaaS Cluster Controller

  1. Ubuntu 14.04 LTS 64 bit machine - can be physical or virtual with public and private mgmt network interfaces
  2. The public interface should be reachable from the Internet or via VPN from the regional controller machine
  3. NAT can also be used - Narinder set this up with Intel, possibly using a port mapping to port 443 (to be confirmed)
  4. Create a user account for the services to run under. It is recommended to call it "ubuntu", with both a password and an SSH key. We will need a secure method to exchange these keys, using GPG signing and https://launchpad.net (a sketch of uploading the key to MAAS follows this list)
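The key exchange itself has to happen out of band as noted above (GPG-signed, via launchpad.net); the sketch below only shows uploading the resulting public key to the MAAS account so that it gets injected into deployed nodes. The sshkeys endpoint path is an assumption based on the MAAS 1.x API documentation, and the addresses and credentials are placeholders.

```python
# Sketch: upload the "ubuntu" service account's public SSH key to MAAS so
# deployed nodes can be reached over SSH.  The sshkeys endpoint is assumed
# from the MAAS 1.x API docs; all values are placeholders.
import requests
from requests_oauthlib import OAuth1

MAAS_URL = "http://10.0.0.1/MAAS/api/1.0"   # placeholder region address
auth = OAuth1("consumer", client_secret="",
              resource_owner_key="token", resource_owner_secret="secret",
              signature_method="PLAINTEXT")

with open("/home/ubuntu/.ssh/id_rsa.pub") as f:
    public_key = f.read().strip()

requests.post(MAAS_URL + "/account/prefs/sshkeys/",
              data={"key": public_key}, auth=auth)
```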

Pods

  1. A cluster controller can manage one or more pods
  2. A pod will need one jumphost machine with its NIC1 on the mgmt private network and a second NIC2 interface for the compute nodes to be set up on the POD
  3. The jumphost will be the PXE boot DHCP and TFTP server on NIC2 for all the compute nodes of its pod.

MaaS JumpHost

  1. The cluster controller will install and configure the OS on a jumphost on the private mgmt network. This jumphost must be able to reach the internet. The cluster controller can be set up as a gateway to the internet, or an external gateway provided by the lab may also be used.
  2. The cluster controller will be the DHCP and TFTP PXEBOOT server for the jumphosts.

Compute Nodes

  1. A pod consists of a Jumphost, some control nodes, and some compute nodes
  2. There is a minimum set of hardware resources required to run all the OPNFV services. Be sure the system(s) where you want to run all the needed services (OpenStack, SDN controller, KVM host, DUT VNFs, and test machines) meet these requirements.
  3. It's possible to have an all-in-one configuration where MaaS can deploy an entire environment in a single machine.
  4. The recommended pod configuration is 5 machines (this is in addition to the pod jumphost, the lab cluster controller, and the master OPNFV regional controller, Jenkins servers, etc.)
  5. Consult the pharos spec for hardware recommendations. It's possible to use either physical or virtual machines
  6. MaaS uses "power drivers" to support IPMI, VMware, UCS, etc.: http://maas.ubuntu.com/docs/api.html (see the drawing and configuration sketch below).

Power Drivers
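As an illustration of how a power driver gets attached to a node, the sketch below points MAAS at a machine's IPMI (lights-out) interface through the node API. The power_parameters_* field names follow the MAAS 1.x CLI/API convention for the "node update" call; the system_id, BMC address and credentials are placeholders.

```python
# Sketch: configure a node's IPMI power driver so MAAS can power the machine
# on/off during commissioning and deployment.  Field names follow the
# MAAS 1.x node API; all values are placeholders.
import requests
from requests_oauthlib import OAuth1

MAAS_URL = "http://10.0.0.1/MAAS/api/1.0"   # placeholder region address
auth = OAuth1("consumer", client_secret="",
              resource_owner_key="token", resource_owner_secret="secret",
              signature_method="PLAINTEXT")

system_id = "node-aabbccdd"                 # placeholder node system_id
requests.put(MAAS_URL + "/nodes/%s/" % system_id,
             data={
                 "power_type": "ipmi",
                 "power_parameters_power_address": "10.4.0.21",  # BMC address
                 "power_parameters_power_user": "admin",
                 "power_parameters_power_pass": "password",
             },
             auth=auth)
```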

Proposed MAAS PoC

STATUS: The MaaS Proof of Concept is complete - now moving to Pilot Project status with an expanded rollout.

Labs involved with POC: (Don't add to this list - just update and correct it. The POC is over.)

MaaS Pilot

STATUS: The MaaS Pilot Project expands on the POC. The POC was completed in November 2015, before the OPNFV Summit. It was agreed to bring in more labs and refine the scope for MaaS within OPNFV labs. There are two main objectives for MaaS:

MaaS Pilot Labs

Labs involved with Pilot: (Please update this list with your interest to participate in the Pilot Project)

Matrix

There are 16 combinations of installers and SDN controllers being considered for OPNFV Release 2. These are reflected in this matrix:

Installer/SDN | OVN | ODL Helium | ODL Lithium | Contrail | ONOS
Apex          | OVN | Complete   | WIP         | WIP      | ?
Compass       | OVN | Complete   | ?           | ?        | ?
Fuel          | OVN | Complete   | ?           | WIP      | ?
JOID          | OVN | Complete   | WIP         | Complete | WIP

The diagram above represents the end state of the different labs. The basic idea is to use the community labs to deploy the different installers repeatedly and reliably, which will increase the usage of each individual community lab and make it a true integration lab alongside the Linux Foundation lab.

Network Requirements

MaaS Workflow Idea

  1. A Jenkins server kicks off a MaaS workflow job to initiate a pod build-out in a lab
  2. The MaaS regional controller receives the job and sends the task to the cluster controller for that lab
  3. The cluster controller builds the jumpbox for the OPNFV installer of choice: Fuel, Foreman, RDO, APEX, JOID, COMPASS, etc…
  4. The jumpbox builds out OPNFV on the compute nodes of the pod with the correct OS and SDN controller: Ubuntu, CentOS, OpenDaylight, Contrail, Midonet, etc.
  5. FUNCTEST jobs run to validate the environment
  6. A quick QTIP benchmark is run to provide a performance score
  7. More in-depth tests can be run as desired: vsperf, storage, yardstick, etc.
  8. When testing is complete the servers are erased and the pod is rebuilt with the new parameters (a sketch of the MAAS API calls behind steps 3-4 and 8 follows this list)
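The sketch below illustrates what the MAAS side of steps 3-4 and 8 could look like: acquiring a ready node from the pool, deploying an OS image onto it, and releasing it afterwards. MAAS 1.x exposes deployment as the "start" operation with a distro_series parameter (newer releases rename it to "deploy"); the region address, credentials and distro series are placeholders, and in practice the calls would be driven from the Jenkins job in step 1.

```python
# Sketch of the MaaS part of the workflow: acquire a ready node, deploy an
# OS onto it, and release it when testing is done.  Operations follow the
# MAAS 1.x nodes API; all values below are placeholders.
import requests
from requests_oauthlib import OAuth1

MAAS_URL = "http://10.0.0.1/MAAS/api/1.0"   # placeholder region address
auth = OAuth1("consumer", client_secret="",
              resource_owner_key="token", resource_owner_secret="secret",
              signature_method="PLAINTEXT")

# 1. Acquire (allocate) any ready node from the pool.
node = requests.post(MAAS_URL + "/nodes/?op=acquire", auth=auth).json()
system_id = node["system_id"]

# 2. Deploy it with the requested OS image (e.g. Ubuntu 14.04 "trusty").
requests.post(MAAS_URL + "/nodes/%s/?op=start" % system_id,
              data={"distro_series": "trusty"}, auth=auth)

# 3. Release the node again once testing is complete (step 8 above).
# requests.post(MAAS_URL + "/nodes/%s/?op=release" % system_id, auth=auth)
```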

References

MAAS POC Slides OCT 12, 2015 : maas.pdf

The original POC work on the original OPNFV Arno release (images and how to build the images) can be found here: http://people.canonical.com/~dduffey/files/OPNFV/