When Jordan Liggitt at Google posted details of a serious Kubernetes vulnerability in November 2018, it was a wake-up call for security teams ignoring the risks that came with adopting a cloud-native infrastructure without putting security at the heart of the whole endeavor.
For such a significant milestone in Kubernetes history, the vulnerability didn’t have a suitably alarming name comparable to the likes of Spectre, Heartbleed or the Linux kernel’s recent SACK Panic; it was simply a CVE entry on the Kubernetes GitHub repo. But CVE-2018-1002105 was a privilege escalation vulnerability that enabled a normal user to steal data from any container in a cluster. It even enabled an unauthenticated user to create an unapproved service on Kubernetes, run the service in a default configuration, and inject malicious code into that service.
The first attack path took advantage of pod exec/attach/portforward privileges to make a user a cluster-admin. The second was possible because a bad actor could use the Kubernetes API server (essentially the front end of Kubernetes, through which all other components interact) to establish an upgraded connection to a back-end server, such as a kubelet or an aggregated API server, and keep reusing that same connection. Crucially, this meant that the attacker could send further requests over the connection’s established TLS credentials, which belonged to the API server itself, and use them to create their own service instances.
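One common hardening response to the first attack path was to audit which RBAC subjects hold the pod exec/attach/portforward subresources, and to grant roles that deliberately omit them. As an illustrative sketch (the namespace and role name here are hypothetical), a read-only Role without those subresources might look like this:

```yaml
# Illustrative Role: grants read-only access to pods and their logs,
# but deliberately omits the pods/exec, pods/attach and
# pods/portforward subresources abused in the first attack path.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
```

A subject bound to this Role can inspect pods but cannot open the upgraded connections (exec, attach, port-forward) that the escalation relied on.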
This was perfect privilege escalation in action: because the requests travelled over an established and trusted connection, they didn’t appear in either the Kubernetes API server’s audit logs or its server log. While they were theoretically visible in kubelet or aggregated API server logs, they looked no different from authorized requests, blending in seamlessly.