2019 was a big conference year for me.
I kicked things off with VMworld US in San Francisco and followed up with VMworld EU in Barcelona. Both of these conferences were monumental because we announced Project Pacific at the keynote in San Francisco and did several deep-dive sessions in Barcelona. I spoke to over a hundred customers at both events and the excitement for this thing to launch is palpable.
Most recently though, I was at KubeCon in San Diego. This was my second KubeCon; I went to my first in Seattle last year, and that one was a bit of a blur because I barely knew anything about Kubernetes at that point. This time, I was more strategic about the sessions I attended, and more importantly, I made a real attempt to meet people in the community.
I thought I would do a brief round-up of the days and sessions I attended at KubeCon. Sadly, I couldn’t attend any of the talks on Thursday since I flew out early, but I learnt so much on the other days that I’m not even mad about it. As I started writing this post, it ended up being much longer than expected, so I’m going to split it into multiple posts, one per day. All of these sessions were recorded and will be shared publicly. I’d also like to note that this post represents my interpretation of the sessions presented, so any mistakes are my own, and I’d encourage you to watch these excellent speakers yourself!
Here’s Monday, or Day 1. Day 0 was the New Contributor Summit, but more on that later.
How the Department of Defense Moved to Kubernetes and Istio – Nicolas Chaillan, Department of Defense
- This was a cool talk about running Kubernetes on fighter jets
- One of their biggest challenges, of course, is security, and they have invested in creating an internal artifact repository of hardened and accredited containers. They use Twistlock for continuous scanning and alerting. Per my understanding, Twistlock supplements Harbor’s basic vulnerability management capabilities by pulling from more than 30 upstream providers for greater coverage, and it goes beyond vulnerability scanning to look for compliance and configuration risks. They’ve also baked in zero-trust security using their Sidecar Container Security Stack
- They’re using Istio as their service mesh and Open Policy Agent (OPA) to enforce policies (there’s a rough sketch of what mesh-wide mTLS looks like after this list)
- Vendor lock-in appeared to be a large concern for them, which is why they leverage Knative for a serverless/FaaS experience.
- I also liked this diagram that shows the various layers of their stack and the tools used at each layer
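Since zero trust and Istio came up so much in this talk, here is a minimal sketch of what mesh-wide mTLS enforcement looks like with Istio’s current PeerAuthentication API. To be clear, this is my own illustration and not the DoD’s actual configuration; a policy named "default" in the root namespace (istio-system here) applies to every workload in the mesh:

```yaml
# Illustrative only: mesh-wide mTLS enforcement with Istio.
# A PeerAuthentication named "default" in the root namespace
# applies to all workloads in the mesh.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT   # sidecars reject any plaintext (non-mTLS) traffic
```

With STRICT mode, services only accept mutually authenticated TLS from other mesh workloads, which is one of the building blocks of the zero-trust posture they described.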
A Peek Inside the Enterprise Cloud at Salesforce – Xiao Zhou & Thomas Hargrove, Salesforce
- This talk was about providing k8s to internal customers and focused primarily on developer productivity
- Their biggest areas of focus in providing k8s at scale seemed to be:
- Multi-tenancy and security
- Deployment management
- Orchestrated production visibility
- Testing, monitoring and alerting
- Multi-tenancy & security:
- They use shared clusters with tenant protections
- mTLS communication with internal services
- OPA
- RBAC to limit namespace access (there’s a small sketch of this at the end of this section)
- Internal secret management instead of k8s secrets
- Detections for malicious behavior
- Container scanning
- Code signing
- Automated patching
- Service mesh
- Orchestrated production visibility:
- This was a pretty nifty solution that I wish I’d taken a picture of, but they basically dump all k8s events and logs into a database that looks like an API server (but isn’t) so developers can run kubectl commands against it just like they normally would
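To make the RBAC point above a little more concrete, here is a minimal sketch of namespace-scoped access. The names (team-a and so on) are hypothetical and this is not Salesforce’s actual setup, just the standard Kubernetes primitives for letting a tenant read resources only inside their own namespace:

```yaml
# Hypothetical tenant-scoped RBAC: members of the "team-a" group can
# read pods, logs, and deployments, but only in the "team-a" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-a-viewer
  namespace: team-a
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-viewer-binding
  namespace: team-a
subjects:
  - kind: Group
    name: team-a               # hypothetical group from the identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-viewer
  apiGroup: rbac.authorization.k8s.io
```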
KubeVirt Intro: Virtual Machine Management on Kubernetes – Steve Gordon, Red Hat & Chandrakanth Jakkidi, F5
- This was a highly relevant talk – I’ve been looking at this project for fairly selfish reasons (Pacific). The use cases that KubeVirt aims to support are:
- To run VMs to support new development (apps that rely on existing VM apps and APIs, managing VMs through Kubernetes APIs; there’s a sketch of what this looks like after this list)
- To run VMs to support apps that can’t lift and shift (older apps, specific vendor appliances)
- This is really not that dissimilar from what we’d like to do with Project Pacific, although our product charter is definitely a lot larger than these two use cases
- The main point though is that VM workloads aren’t going away any time soon, and there are workloads that may never be containerized, or at least not in the near term
- VNFs (virtual network functions), or NFV (network function virtualization) as we call it at VMware, are a prime target for KubeVirt since they are difficult to containerize but could greatly benefit from the compute and management capabilities of Kubernetes
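If you are curious what managing VMs through the Kubernetes API actually looks like, here is roughly what a KubeVirt VirtualMachine manifest contains, adapted from the project’s quick-start demo (the image and sizing are just the demo values, and the name is made up):

```yaml
# A minimal KubeVirt VirtualMachine, based on the upstream cirros demo.
# Once applied, it can be started and stopped via the virtctl CLI or by
# toggling spec.running, and it is managed through kubectl like any other object.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: false                  # create the object, but don't boot it yet
  template:
    metadata:
      labels:
        kubevirt.io/vm: demo-vm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 64Mi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
```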
Walls Within Walls: What if Your Attacker Knows Parkour? – Tim Allclair & Greg Castle, Google
- Okay, this session was probably one of the most interesting sessions I attended at the conference. Kudos to Tim and Greg for making what can be a dull topic really fun!
- I am going to butcher the explanation, so you should really just watch this session yourself. But they demonstrated how isolating pods onto different nodes isn’t a foolproof solution, since pods can still escape. How? They created a static pod, assigned it the same labels as an actual pod so the API server would think this “fake” pod was the real thing, and got this devious fake pod running on the same node (static pods are run directly by the kubelet, so they bypass the scheduler entirely)
- The key takeaway for me was that container breakouts are a real thing. Scheduling pods on separate nodes can help but is not a guarantee. What does help is stronger pod isolation using gVisor or Kata Containers (or a VM per pod) so that there is no shared kernel.
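For reference, that stronger isolation is usually wired up through a RuntimeClass. This is just a sketch, assuming the nodes already have gVisor’s runsc runtime installed and registered with the container runtime; the names and image are illustrative:

```yaml
# Hypothetical RuntimeClass mapping to gVisor's runsc handler
# (assumes containerd/CRI-O on the nodes is already configured for runsc).
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
---
# A pod opting into the sandboxed runtime, so it does not share
# the host kernel directly with everything else on the node.
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app
spec:
  runtimeClassName: gvisor
  containers:
    - name: app
      image: nginx             # illustrative image
```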
That was my first day in a nutshell! Blog post about day 2 coming soon.