Containerizing ONAP for Efficiency: Why Bell created OOM using Kubernetes

19-01-2018 / CloudOps

Companies determined to stay digitally relevant are adopting software-centric architectures. Many are transitioning their IT departments from VMs to containers in order to orchestrate workloads in an agile fashion. Although VMs have brought a degree of agility and flexibility by abstracting the computer system into virtual machine images, they carry the limitations of the traditional one-OS-per-VM model, such as a resource-heavy and monolithic architecture. Containers alleviate that dependency by abstracting the OS, allowing developers to create fully autonomous and loosely coupled applications that can be deployed, tested, and updated individually. Kubernetes in turn groups the containers that make up an application into pods: logical units that perform specific functions, facilitate management and discovery, and allow Docker containers to be managed at scale. As open source technologies, Docker and Kubernetes provide long-term viability by standardizing application design into modular and portable microservices that can be deployed across multiple cloud platforms. Replatforming applications onto container technologies is therefore proving necessary for organizations of any size that wish to remain competitive in today's fast-paced and highly dynamic environment.
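To make the pod concept concrete, here is a minimal, hypothetical Kubernetes pod definition that groups an application container with a logging sidecar into one schedulable unit. All names and images below are illustrative only, not taken from ONAP:

```yaml
# Hypothetical pod: two containers deployed, scheduled, and scaled as one unit.
apiVersion: v1
kind: Pod
metadata:
  name: example-app          # illustrative name
  labels:
    app: example-app
spec:
  containers:
  - name: app                # main application container
    image: example/app:1.0   # placeholder image
    ports:
    - containerPort: 8080
  - name: log-forwarder      # sidecar sharing the pod's network namespace
    image: example/logger:1.0
```

Because both containers share the pod's network and storage, they can cooperate closely while remaining individually buildable and updatable.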

Bell Canada takes containers to an entirely new level as the first telecommunications provider to deploy an open source version of the Open Network Automation Platform (ONAP) to automate its data centre network provisioning. ONAP delivers capabilities for the design, creation, orchestration, monitoring, and life-cycle management of Virtual Network Functions (VNFs) and Software Defined Networking (SDN), bringing both agility and significant cost reduction to network services. The ONAP control plane, however, requires significant investment in infrastructure and resources to deploy and run. Bell helped create a new way of running and operating ONAP, the ONAP Operations Management (OOM) platform, which repackages the ONAP control plane instances into containers managed by Kubernetes. By accomplishing this feat, Bell’s OOM platform fills the need for a consistent, platform-wide method of managing software components and reduces ONAP’s resource footprint by eighty percent. It also improves efficiency, speed, portability, and cloud independence, supports multiple ONAP instances, and enables richer automation.

Efficiency – Bell’s creation of OOM renders ONAP substantially more compact. To illustrate, a VM-based deployment of ONAP requires 29 VMs, 148 vCPUs, 336 GB RAM, 3 TB of storage, and 29 floating IP addresses. The same deployment under OOM requires only 16 VMs, 52 vCPUs, 152 GB RAM, 980 GB of storage, and 15 floating IP addresses. Furthermore, the majority of that remaining footprint is consumed by the Data Collection, Analytics and Events (DCAE) components, which still run on VMs: DCAE alone currently uses 15 VMs, 44 vCPUs, 88 GB RAM, 880 GB of storage, and 15 floating IP addresses. Replatforming reduces the equipment needed, and the delta significantly lowers CapEx and operational complexity. OOM also intends to offer service providers a single dashboard and user interface (UI) from which to operate all or part of the ONAP platform, view the instances being managed and the state of each of their components, and monitor actions within a control loop in order to trigger corrective actions.
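Taking the figures above at face value, the per-resource savings work out as follows. This is a quick back-of-the-envelope calculation from the numbers quoted in this article, not an official ONAP benchmark:

```python
# Resource footprints quoted above: (VM-based ONAP, OOM-based ONAP).
footprints = {
    "VMs": (29, 16),
    "vCPUs": (148, 52),
    "RAM (GB)": (336, 152),
    "Storage (GB)": (3000, 980),   # 3 TB expressed in GB
    "Floating IPs": (29, 15),
}

for resource, (vm_based, oom_based) in footprints.items():
    # Percentage reduction going from the VM-based footprint to OOM.
    reduction = 100 * (vm_based - oom_based) / vm_based
    print(f"{resource}: {vm_based} -> {oom_based} ({reduction:.0f}% less)")
```

Most of what remains, as noted, belongs to the DCAE components that still run on VMs.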

Speed – Greater efficiency translates to greater speed, and Bell’s replatforming results in a faster time to market. Being up to ten times lighter than VMs, containers also boot up to ten times faster. With OOM, ONAP can be deployed in under eight minutes, while the same deployment on VMs would easily take tens of minutes. Subsequent lifecycle management steps are expedited accordingly, lowering OpEx.

Cloud Independence – Avoiding vendor lock-in by maintaining portable applications is crucial for any enterprise seeking to remain competitive and innovative. By sharing the host operating system of the physical machine, containers allow applications to be fully portable between servers and cloud providers. Furthermore, they can be deployed on bare metal servers, virtual servers, private clouds, or public clouds, giving an enterprise more flexibility and the ability to scale and innovate autonomously. Bell’s replatforming decouples the network function in software from the supporting infrastructure in hardware, which allows ONAP to be easily migrated across clouds.

Multiple ONAP Instances – Container pods isolate applications, making them simple to create, modify, or tear down as requirements change. While ONAP enables the entire platform to be deployed at once, OOM addresses the ensuing challenges of duplicate containers and container dependencies. The result is multiple Kubernetes deployment specifications that allow the same Kubernetes cluster to host development, test, staging, and production environments at once, minimizing deployment times and reducing CapEx by optimizing resources.
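As a sketch of how such side-by-side environments are isolated in one cluster, each environment can live in its own Kubernetes namespace. The namespace names below are illustrative, not taken from OOM:

```yaml
# Hypothetical namespaces isolating multiple ONAP environments in one cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: onap-dev        # development instance
---
apiVersion: v1
kind: Namespace
metadata:
  name: onap-staging    # staging instance
---
apiVersion: v1
kind: Namespace
metadata:
  name: onap-prod       # production instance
```

Resources deployed into one namespace are named and discovered independently of the others, so identical component sets can coexist without collisions.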

Richer Automation – Automation is a significant advantage that Kubernetes offers over VM-based orchestration tools. For example, Kubernetes services let components find each other by stable names within a namespace rather than by hard-coded addresses, so the underlying pods that provide a service can scale horizontally without any additional IP configuration to deliver the added capacity. This name-based service discovery facilitates a much greater degree of automation.
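A hedged sketch of that mechanism: a Kubernetes Service gives a stable name (and virtual IP) to whichever pods currently match a label selector, so replicas can be added or removed without clients reconfiguring. The names here are illustrative, not actual ONAP component names:

```yaml
# Hypothetical service: a stable front for a horizontally scaled set of pods.
apiVersion: v1
kind: Service
metadata:
  name: inventory        # resolvable as inventory.<namespace>.svc.cluster.local
spec:
  selector:
    app: inventory       # traffic is routed to every pod carrying this label
  ports:
  - port: 8080           # port clients connect to on the service
    targetPort: 8080     # port the pods actually listen on
```

Clients simply address `inventory:8080`; Kubernetes load-balances across however many matching pods exist at that moment.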

In conclusion, the success of Bell’s replatforming serves as an example for telcos seeking to leverage the power of open source and choosing to own their destiny in the cloud. OOM’s streamlining of ONAP using Kubernetes showcases the momentum of open source technologies and the advantages of container platforms. Furthermore, it is a testament to the breadth of use cases for Kubernetes deployments, in both the type of organization and the workload involved. For organizations large and small, and whatever the workload, replatforming to containers has become the baseline.

Cloud networking infrastructure offers numerous benefits, but it also poses challenges that demand a high degree of technical knowledge and experience. The migration to a fully virtualized network platform must coexist with custom hardware-based network platforms, and with a transition from prior BSS and OSS practices to DevOps approaches. Another important issue is achieving adequate security against threats ranging from attacks and misconfiguration to hardware and software failures. Finally, vendor lock-in is a concern: companies seeking to migrate their services among providers must successfully integrate hardware and virtual appliances from a variety of vendors. Technical expertise in deploying and operating applications with Kubernetes is a must, and can be acquired through relevant workshops and trainings.

CloudOps is proud of its involvement in communities, and is a Kubernetes Certified Service Provider as well as a member of the Linux Foundation, the CNCF, and the LFN.

