Kubeadm vs OpenShift: who's the build winner?
In any contest of cloud orchestration layers there are many contenders and many winners. After completing the OpenShift 4.2 course as a Certified Kubernetes Administrator, I can say I was impressed with Red Hat's (now part of IBM) iteration of Kubernetes. The organisation of the OpenShift layer is automated, providing a uniform and streamlined experience for everyone working on it. For example, OpenShift automatically deploys cluster operators (controllers dedicated to cluster-management functions), each of which owns the Kubernetes objects needed to run its slice of the cluster. This approach standardises many processes, creating a more stable and less error-prone orchestration layer.
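If you have access to an OpenShift cluster, you can see these operators directly. A quick sketch, assuming an authenticated 'oc' session with cluster-reader rights:

```shell
# List the cluster operators OpenShift deploys and manages automatically.
# Each row is a controller owning one cluster-management function
# (networking, ingress, authentication, and so on).
oc get clusteroperators
```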
So, before I present some comparisons in our build-off, it's worth noting that OpenShift is not a separate technology: it is a Kubernetes distribution that layers opinionated automation over the same core components a kubeadm build exposes directly. Let's now compare some key points between the two.
- Kubeadm is a basic Kubernetes build that is totally mutable. I have seen high-availability opportunities missed because only a single control-plane node was deployed. OpenShift streamlines this choice in its deployment configuration and defaults to a highly available layout: a bootstrap node, 3 master (control-plane) nodes and 3 worker nodes in an HA spread. So +1 to OpenShift for being prescriptive about architecture where its lower-level parent leaves the decision to the engineer or architect.
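For comparison, an HA kubeadm bootstrap has to be assembled by hand. A minimal sketch, assuming a pre-provisioned API load balancer (the address 'lb.example.com' is a placeholder):

```shell
# First control-plane node: point the cluster at the load-balanced
# endpoint rather than this node's own address, so further
# control-plane nodes can join later.
sudo kubeadm init \
  --control-plane-endpoint "lb.example.com:6443" \
  --upload-certs

# Each additional control-plane node then joins with the command that
# kubeadm init prints, e.g.:
#   kubeadm join lb.example.com:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash> \
#     --control-plane --certificate-key <key>
```

Every one of those decisions (and the load balancer itself) is on the engineer; OpenShift's installer makes them for you.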
- I've seen kubeadm builds go wrong on subnet assignments through human error, mainly miscommunication of requirements. OpenShift determines subnet assignments for nodes automatically on all major cloud platforms and bare-metal installations via its ovs-subnet plug-in. This prescriptive approach reduces errors and provides safer pod-to-pod communication once deployed. +1 to OpenShift again for its procedure-based, plug-in-driven approach.
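On the OpenShift side, those subnet decisions live in the installer's configuration rather than in anyone's head. An illustrative excerpt, using the documented 4.x defaults:

```shell
# Networking stanza of an install-config.yaml; the installer carves a
# per-node pod subnet out of clusterNetwork automatically.
cat <<'EOF' > install-config-networking.yaml
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14   # pod network for the whole cluster
    hostPrefix: 23        # each node receives a /23 (~510 pod IPs)
  serviceNetwork:
  - 172.30.0.0/16         # cluster-internal service IPs
EOF
```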
- OpenShift SDN configures nodes uniformly for operations. Each node gets an OVS bridge, br0, with flow rules for pod traffic. Port 2 on br0 carries tun0, the internal OVS port, which uses Netfilter (the Linux kernel's packet-filtering framework, essential to Kubernetes) for NAT to the outside world, rules enforcement and subnet interactions. Port 1 on br0 carries vxlan_sys_4789, the OVS VXLAN port that provides communication with remote nodes. Kubeadm, by contrast, requires the br_netfilter and overlay kernel modules to be loaded on every node, with the configuration applied manually via the command line or IaC. OpenShift's automation of node comms versus kubeadm's manual configuration makes OpenShift the winner in this segment.
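The manual kubeadm side of that comparison looks like this on every node. A sketch of the standard prerequisite steps (run with root privileges, and typically baked into IaC):

```shell
# Load the overlay and br_netfilter kernel modules now and on boot.
cat <<'EOF' | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Let iptables see bridged traffic and enable IP forwarding,
# both required for pod-to-pod and pod-to-service routing.
cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
```

Forget one of these on one node and you get the kind of intermittent networking faults that are miserable to debug.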
- OpenShift has a clear advantage on multi-tenant isolation via 'projects' (OpenShift's take on Kubernetes namespaces). Its ovs-multitenant plug-in provides network isolation between projects: it assigns each non-default project a VNID and tags every packet crossing the node's OVS bridge with it. This is a huge boon for network isolation and lets one cluster safely host multiple tenants, logically separated by project. +1 to OpenShift on this one too; the kubeadm journey to that level of network isolation in a multi-tenant scenario is far more involved.
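With ovs-multitenant active, isolation is managed per project from the CLI. A sketch, assuming cluster-admin rights; the project names are placeholders:

```shell
# Give project-a its own VNID wall (traffic from other projects dropped).
oc adm pod-network isolate-projects project-a

# Merge two projects onto one VNID so their pods can talk to each other.
oc adm pod-network join-projects --to=project-a project-b

# VNID 0: a global project that can reach, and be reached by, all others.
oc adm pod-network make-projects-global project-infra
```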
- OpenShift demands considerably more resources in node count, node size and CPU than a kubeadm build. Kubeadm builds are lighter and more customisable given their mutability, so +1 to kubeadm as it offers more control for the skilled engineer.
- OpenShift's administration is more versatile. For example, OpenShift accepts both 'oc' and 'kubectl' commands, with 'oc' adding project-level and administrative conveniences on top of the standard kubectl verbs; that extra tooling is part of what makes it heavier than a kubeadm build, which ships 'kubectl' alone. This choice, plus the considerable automation it brings to the sysadmin's day, makes OpenShift the clear winner here.
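A few examples of that dual CLI; the project and image names below are placeholders:

```shell
# The same query through either client:
kubectl get pods -n my-app    # standard Kubernetes CLI
oc get pods -n my-app         # identical result via oc

# oc-only conveniences with no direct kubectl equivalent:
oc new-project my-app         # create a project (namespace + annotations)
oc project my-app             # switch the active project context
oc new-app nginx              # spin up a workload + service from an image
```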
So, OpenShift wins 5 to 1 against a raw kubeadm build for deploying and using Kubernetes. The core technology is very stable and incredibly useful for container orchestration, availability and administration, yet Red Hat and other providers have shown it can always be improved upon. Stay tuned for more on cloud infrastructure in this blog, along with articles on other areas of interest in the Writing and DevOps arenas. To not miss out on updates on my availability, tips on related areas or anything of interest, sign up for one of my newsletters in the footer of any page on Maolte. I look forward to us becoming pen pals!