There has been a lot of chatter recently, most visibly in a much-shared tweet, about whether Kubernetes (hereafter referred to as k8s) has actually delivered its users to the promised land, as was widely suggested at the time.
I think the answer is… it’s more nuanced than the tweet suggests, which is perhaps true of all tweets! I helped launch Tanzu at VMware in 2019, specifically k8s integrated into vSphere, a product line that VMware’s new overlord Broadcom now touts as an important focus area for the business. Having seen both on-prem customers at VMware and cloud-native ones at Google Cloud, I have some observations.
Let’s start with on-prem. At VMware, k8s was intended to let developers provision and manage their own containers/VMs without having to consume a new, platform-specific API. For the platform operators (or IT administrators), the big sell was oversight and control over resource consumption. We summarized the value proposition as:
- Availability of a widely adopted, industry-standard API surface, rather than yet another proprietary one
- Portability across clouds, i.e. in theory, run your clusters anywhere without being tied to a specific cloud provider (the sketch after this list shows what that looks like in code)
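To make that first point concrete, here’s a minimal sketch of what “one API surface everywhere” means in practice, using the official kubernetes Python client. Everything in it (the names, the nginx image, the default kubeconfig) is illustrative; the point is that the cluster behind the kubeconfig could be vSphere with Tanzu, GKE, EKS, or a kind cluster on a laptop, and neither the spec nor the call would change.

```python
# Minimal sketch: one declarative spec, any conformant cluster.
# Requires the official client: pip install kubernetes
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config; the cluster behind it could be anywhere

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")]
            ),
        ),
    ),
)

# The same call, unchanged, whether the provider is VMware, Google, or Amazon.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Nothing in that spec names a cloud, which is the portability argument in a single file.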
For the most part, customers were sold on the idea. They were going to stay on-prem for a while, even if the long-term plan was to migrate workloads to the public cloud (on that note, peep the latest announcement from Uber). In the interim, though, their CIOs had instructed them to ‘modernize’ applications, and using k8s to manage VMs and containers was a no-brainer way to check that box.
What we didn’t bargain for was how difficult it would be to set up the infrastructure to run k8s. In the vSphere product team, we spent a lot of time on day-0 enablement: how do you enable customers to set up their hosts, clusters, networking, and storage so that the resulting k8s clusters actually function? There were enough subtle differences between customers’ environments that we couldn’t just automate it all; it required bespoke handcrafting. The process was error-prone, and every time a customer had to manually debug something, they were that much closer to just giving up. And these customers were VMware experts, not k8s experts, so we had to be part of that learning journey with them, which was extremely challenging.
This is actually where OpenShift got things right: its onboarding experience was excellent, and we heard that feedback from customers often. So I wasn’t surprised to see that it recently hit $1B in ARR; props to them.
Between VMware, OpenShift, and perhaps Rancher, the on-prem k8s world is mostly locked down. On the public cloud side, as many readers will already know, every major vendor has a k8s product, whether it’s a managed-cluster experience (EKS, AKS, GKE, Rancher, D2iQ) or a PaaS/serverless one (AWS Fargate with EKS, GKE Autopilot, Cloud Run, which is built on Knative).
If you’re an average company that wants to run and manage containers in a scalable way, with high availability and load balancing, your options are ECS, managed k8s, or DIY k8s. There’s not a lot of variety here! DIY k8s is hard, and ECS locks you into AWS. What you’re left with is managed k8s, or you can go full PaaS/serverless.
So where does this leave us? The industry chose k8s. The open-source community stands behind k8s. If you want containers and some degree of control over your clusters, you need managed k8s. Done right, this option could end up being a lot cheaper too: the cloud provider tackles most of the k8s complexity, leaving customers to build and deliver value to their users.
If you want containers but don’t want to manage them yourself, you can go serverless. You don’t really need to give a crap about k8s. It also doesn’t seem to matter what the cloud provider is using under the hood unless you need k8s conformance or integrations with the broader k8s ecosystem. But pricing here is opaque and you’re paying for the convenience, so it will likely be more expensive.
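For contrast, here’s a hedged sketch of what the serverless path can look like, using Google’s Cloud Run client library for Python; the project, region, and service name are placeholders, and the image is Google’s public hello-world sample. Notice what’s absent: no nodes, no Deployments, no kubeconfig. Whether Knative or k8s sits underneath is the provider’s problem, not yours.

```python
# Sketch of the serverless-container path: ship an image, get back a URL.
# Requires: pip install google-cloud-run (plus configured GCP credentials)
from google.cloud import run_v2

svc_client = run_v2.ServicesClient()

service = run_v2.Service(
    template=run_v2.RevisionTemplate(
        containers=[
            # Google's public hello-world sample image
            run_v2.Container(image="us-docker.pkg.dev/cloudrun/container/hello")
        ]
    )
)

operation = svc_client.create_service(
    request=run_v2.CreateServiceRequest(
        parent="projects/my-project/locations/us-central1",  # placeholder project/region
        service=service,
        service_id="hello",
    )
)
print(operation.result().uri)  # the service's public URL once the rollout completes
```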
If you want nothing to do with containers, then the world is your oyster! Just use EC2 or Compute Engine or AWS’s Firecracker or the new Fly.io with Nomad, or whatever VM/micro-VM flavor you want. Because of its popularity, a lot of people jumped on the k8s train without ever asking whether they actually needed it. And if you chose a sledgehammer to crack a nut, you can’t blame k8s for that.