Lost at Sea: Navigating the Complexities of Kubernetes
A Comparative Summary of Kubernetes Tooling: Open Source, Distributions, and Managed Services
Kubernetes facilitates the deployment of containers at scale, allowing application design to be standardized into modular, portable microservices that can be deployed across multiple cloud environments. While the efficiency and long-term viability of Kubernetes are apparent, its reputation for complexity is not unfounded. To that end, CloudOps has compiled a summary of the most well-known tooling for implementing and managing Kubernetes clusters. Organizations seeking to leverage the capabilities of Kubernetes can look to a variety of open source tooling, distributions, and managed services, each with its own merits.
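To make the idea of modular, portable microservices concrete, a service is typically described declaratively in a manifest like the sketch below (the service name, labels, and image are hypothetical examples, not part of any specific offering discussed here):

```yaml
# Illustrative manifest: a minimal Deployment running three replicas
# of a containerized microservice. Names and image are examples only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: registry.example.com/orders:1.0.0
        ports:
        - containerPort: 8080
```

Because the manifest is declarative and portable, the same definition can be applied to any conformant cluster, regardless of which cloud or data center hosts it.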
Generically referred to as ‘vanilla’ Kubernetes, open source tooling puts the entirety of the container deployment in the hands of your internal Operations team. The success of the deployment is consequently dependent on their expertise. An experienced team will benefit from the flexibility provided and plan the deployment around their application. They will know how to manage version upgrades and, through contributing to the source code, add features suited to their application’s schedule and requirements. Beyond carrying no licensing cost, open source tooling gives you full control over the destiny of your container deployment.
However, an inexperienced Operations team might struggle with the available options when deploying with open source tooling. Kubespray is known for lengthy deployment times, kops targets only AWS, and kubeadm provides no High Availability (HA) option. While the open source installation tools are neither difficult nor complex in and of themselves, the sheer number of configuration and deployment options has made Kubernetes infamously difficult to set up for production use. Furthermore, the lack of integrated enterprise features, especially Lightweight Directory Access Protocol (LDAP) support tied into Role Based Access Control (RBAC), might limit the ability of some organizations to adopt an open source strategy. While controlling your own destiny is important, your organization must be prepared to manage the complexity and long-term operability of open source tools.
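To give a sense of what RBAC policies involve in practice, the sketch below shows a Role granting read-only access to pods, bound to a group that an enterprise identity layer (LDAP or AD) might supply. All names here are illustrative:

```yaml
# Illustrative RBAC sketch: a namespaced read-only role and a binding
# to a group. The namespace, role, and group names are examples only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: staging
  name: read-pods
subjects:
- kind: Group
  name: ops-team        # e.g. a group mapped from an LDAP/AD directory
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The policy itself is straightforward; what distributions and managed services typically add is the authentication plumbing that maps directory users and groups into subjects like the one above.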
Distributions offer a compromise between the flexibility of open source tooling and the ease offered by managed services. They still require an internal Operations team to oversee the deployment, but they simplify the process of adopting Kubernetes by presenting opinionated tools for building and managing clusters. Distribution vendors often provide complete platforms that define processes for running builds and tests, creating images, deploying, and staging production. Priced support contracts are generally available, as are value-added features such as LDAP support with RBAC. Relying on such features does, in turn, increase the risk of vendor lock-in. While distributions limit the potential to customize your experience (version upgrades remain in your hands), they allow developers to automate container operations much more quickly. Below are a few well-known distributions available today.
Red Hat OpenShift – Delivered as an opinionated PaaS built on top of a Kubernetes infrastructure, Red Hat OpenShift offers more out of the box than other distributions. It is a full platform solution that oversees all aspects of the software development life cycle, including access control, building code, running tests, creating and uploading images to an image repository, and deploying published images and application clusters. The entire stack has guaranteed interoperability between the OS (RHEL), orchestration layer (Kubernetes), and runtime (Docker). Updates are validated and released in batches, which ensures cohesion but can result in feature lag. Users can choose among an ‘Online’ option (access to a tenant in Red Hat’s deployment), a ‘Dedicated’ option (Red Hat manages it for you), a ‘Container Platform’ option (deploy it in your own data center, with supported Gluster storage integration available as a paid add-on), and ‘Origin’ (the open source version). Overall, Red Hat provides an extremely stable enterprise offering that is both consistent and easy to manage.
Rancher – Rancher 1.0 is deterministic in deployment and lightweight in installation. It is open source and requires no support contract. The usable and manageable platform lends itself to a simplicity and flexibility that, along with the easy management of multiple Kubernetes clusters, make it ideal for straightforward infrastructures. Furthermore, it provides Active Directory (AD), LDAP, and Security Assertion Markup Language (SAML) support. It was designed to be agnostic at the orchestration layer and sought to bring together individuals using Kubernetes, Docker Swarm, and Rancher’s own Cattle. With the release of Rancher 2.0 announced this past September, we can expect a more comprehensive solution that uses Kubernetes as its sole container orchestrator in the future.
Tectonic – Tectonic employs Kubernetes and the CoreOS stack to run Linux containers. Backed by CoreOS, it enables the user to leverage CoreOS Container Linux, a lightweight container operating system. Tectonic also supports Quay Enterprise, a multi-tenant container registry with image vulnerability scanning. A monitoring stack is included within the core product for improved operational visibility. While Tectonic deploys a ‘vanilla’-like form of Kubernetes, it includes added enterprise features, such as AD and LDAP support. Additionally, no support contract is required for small deployments of up to ten nodes. CoreOS was recently acquired by Red Hat for $250 million, demonstrating the strength of Kubernetes in driving container-based applications.
Canonical – Canonical offers an opinionated deployment that utilizes Ubuntu for the entirety of its node configuration. While AD and LDAP support are provided, upgrades are non-trivial. Like Tectonic, Canonical deploys a ‘vanilla’-like form of Kubernetes with a few enterprise features. In addition to a Kubernetes distribution, Canonical also offers managed Kubernetes services that can run either in your data centers or in public clouds. Canonical has a partnership with Google to allow GKE worker nodes to leverage Canonical’s Kubernetes distribution, enabling a fully managed offering that includes both master nodes (Google) and worker nodes (Canonical).
Managed Kubernetes offerings enable enterprises to entrust container orchestration to the service provider, backed by an SLA. While they force you to adapt your application to the service, they ease the process by offering in-depth services that vary among providers. Most managed Kubernetes services provide and operate master nodes in addition to service integrations, such as ingress controllers, storage, image registry, and identity management. Many also offer container-optimized operating systems for worker nodes. Public clouds leverage their existing resources to provide infrastructure, which additionally removes the need to purchase and maintain hardware. They simplify and expedite the process of installing and managing containers.
Managed services enable leaner deployments and smaller, more focused Operations and DevOps teams. However, flexibility is sacrificed because you are forced to adapt your application life cycle to the dictates of the service. As master nodes are automatically upgraded (roughly every three months), worker nodes must be kept current (usually within two versions) to avoid becoming unsupported. Version upgrades can furthermore introduce feature changes that your application is not yet ready to support. Likewise, dependence on certain features can increase the chance of vendor lock-in. Managed Kubernetes can ease the installation and management of Kubernetes itself, but you should understand how its limitations could impact your business.
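One way teams manage this version-skew constraint is to pin worker node versions explicitly in infrastructure-as-code. The following is a hedged sketch using Terraform’s Google provider (cluster names, location, and version numbers are illustrative assumptions, not recommendations):

```hcl
# Illustrative Terraform sketch: pinning the node pool version on a
# managed GKE cluster so workers stay within the supported skew of the
# masters, which the provider upgrades automatically.
resource "google_container_cluster" "primary" {
  name                     = "example-cluster"
  location                 = "us-central1"
  min_master_version       = "1.9"   # masters are managed and upgraded by Google
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "workers" {
  name       = "example-workers"
  location   = "us-central1"
  cluster    = google_container_cluster.primary.name
  version    = "1.8"                 # kept within two versions of the masters
  node_count = 3
}
```

Reviewing pinned versions on a regular cadence, rather than reacting to deprecation notices, keeps the upgrade treadmill predictable.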
Google Kubernetes Engine (GKE) – GKE was the original managed Kubernetes service in the market and, as such, has the most mature offering. Kubernetes was open sourced by Google, which consequently contributes more to the source code than anyone else. GKE goes beyond the standard, expected features. It includes, for example, automatic upgrades and autoscaling of worker nodes through its administration portal; integrated cloud service features, such as ingress controllers for load balancing and firewalling; and visibility features, such as monitoring and logging. Google manages the master nodes as part of the GKE offering; while you don’t have access to manipulate those nodes according to your needs, you also aren’t charged for their computing resources. Additionally, you can trust that the master nodes are deployed in HA with an SLA. GKE works well with both Google Cloud Storage and Google’s identity management, and allows easy integration with other Google Cloud services.
Amazon Elastic Container Service for Kubernetes (EKS) – While newer to the market, Amazon’s managed Kubernetes service is expected to eventually reach functionality equivalent to Google’s GKE offering. AWS is, generally speaking, the most mature cloud offering on the market, with extensive value-added services and integrations available, and it is only a matter of time before EKS fully leverages this ecosystem to deliver a seamlessly integrated solution. However, as a cloud-based container service built on a fully proprietary ecosystem, the offering carries the potential for vendor lock-in.
Microsoft Azure Container Service (AKS) – Also new to the market, AKS is still establishing differentiated value. Like EKS, only time will tell how well the service adapts to the market and develops its offering. If you are already using Microsoft’s Azure services, AKS is an obvious offering to evaluate and consider. Given how new both EKS and AKS are, it is difficult to compare them with GKE, which has an obvious lead in this space. Expect Microsoft to make a big push with this service; it will be one to watch going forward.
Navigating the Open Seas
Kubernetes has proven itself to be a robust and reliable technology that will increase agility and efficiency within your organization. While its installation and management are known to be complex, there are numerous tools that will allow you to leverage this technology. ‘Vanilla’ deployments offer flexibility and the potential to truly own your destiny, but the operational intricacy can be overwhelming. Distributions manage platform architectures and dependencies, thereby allowing developers to push application code to source control repositories more quickly; they prescribe the deployment. Managed services assume total responsibility for the operation of the Kubernetes management layer, enabling developers to quickly develop, deploy, and scale cloud applications with on-demand clusters. The process is easier, but flexibility is limited. An in-depth knowledge of the intricacies involved is essential to understanding the ways in which Kubernetes’ various tools interact with applications.
As a newly Certified Kubernetes Service Provider, CloudOps offers hands-on workshops and training sessions focused on deploying Docker and Kubernetes technologies on public clouds, including GCP. For those who are unsure about which route to take, we perform Application Platform Assessments that evaluate the business and technical needs of your organization to help define the ideal solution. While the process can seem overwhelming, navigating the complexities of Kubernetes can take your digital enterprise to the next level.
About the Author – Will Stevens
As CloudOps’ first employee and current CTO, Will Stevens has experienced the significant shift of organizations adopting the cloud, both from a technical and business value perspective. With a background in development, Will has worked with multiple customers to facilitate the consumption of cloud services, as well as with service providers to deliver cloud solutions. Will was the VP of Apache CloudStack in 2016 and is an avid open source advocate.