Kubernetes telemetry feature can fully compromise clusters

If Kubernetes admins didn’t have enough to worry about with the upcoming Nginx gateway cutoff, they may now need to rifle through their Helm charts to thwart a dangerous permission setting.

Security researcher Graham Helton has shared a Kubernetes vulnerability he unearthed that allows a user, armed with nothing more than a read-only permission, to run arbitrary, and even privileged, commands on any pod in a cluster.

The trick is to use a service account with GET permission on the Kubernetes nodes/proxy resource. That permission is relied on by dozens of monitoring tools, yet it also provides the access needed to issue privileged commands to pods.

In other words, it’s a feature, not a bug.

Working as intended

Helton initially reported the quirk as a bug in November through the Kubernetes bug bounty program. The issue was soon closed, marked as “intended behavior.”

The nodes/proxy GET call is intended for service accounts and is used by many monitoring tools.

How a GET request gets transformed into full remote code execution comes down to a mismatch between WebSockets and the Kubelet’s authorization logic: an exec session starts life as a GET request that is upgraded to a WebSocket connection, so the Kubelet authorizes it as if it were a read.

Helton found Helm charts for 69 tools that use nodes/proxy GET. For those tools, the permission provides access to a node’s internal API to fetch the data they need.
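As an illustration of the legitimate use case, Prometheus’s well-known example configuration scrapes each node’s kubelet metrics through the API server’s node proxy path, which requires exactly this GET-on-nodes/proxy permission. This is a generic sketch, not taken from any of the 69 charts Helton examined:

```yaml
# Sketch of a Prometheus scrape job that reads kubelet metrics
# via the API server's node proxy (requires GET on nodes/proxy).
- job_name: kubernetes-nodes
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
    - role: node
  relabel_configs:
    # Send every scrape to the API server...
    - target_label: __address__
      replacement: kubernetes.default.svc:443
    # ...and rewrite the path to the node proxy endpoint.
    - source_labels: [__meta_kubernetes_node_name]
      regex: (.+)
      target_label: __metrics_path__
      replacement: /api/v1/nodes/${1}/proxy/metrics
```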

“Some of the world’s biggest Kubernetes vendors rely on it because there is no generally available alternative,” Helton writes on X.

So there is no CVE alert for the nodes/proxy GET behavior, because it’s not a vulnerability.

The official path forward is KEP-2862 (“Fine-Grained Kubelet API Authorization”), an enhancement slated for the Kubernetes 1.36 release, expected in April.

How to bring down a Kubernetes cluster

So, if you have a service account that has been granted GET on nodes/proxy and can reach a node’s Kubelet on port 10250, you are free to issue any command through the Kubelet’s /exec endpoints, including commands to privileged system pods that could destroy the cluster entirely.

An attacker could also, according to Helton, steal service account tokens from other pods or execute code in control plane pods.

Worse yet, no record would be left of such malicious actions, as the “Kubernetes AuditPolicy does not log commands executed through a direct connection to the Kubelet’s API,” Helton explains.

This is all made possible by a cluster permission set that grants GET on the nodes/proxy resource.
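A minimal sketch of what that looks like in RBAC terms (the role name is illustrative; real charts bundle this rule alongside others):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-node-reader  # illustrative name
rules:
  - apiGroups: [""]             # core API group
    resources: ["nodes/proxy"]  # the subresource at issue
    verbs: ["get"]              # "read-only," yet enough for exec
```

Any service account bound to a role containing this rule, and able to reach port 10250, clears the bar described above.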

If you want to try it out for yourself, Helton posted an entire lab.

Precautions to take?

Those running with these settings may have to ask themselves a hard question: Do you value your telemetry more than your security?

Industry observer Alex Ellis calls the disclosure “worrying.”

Jed Salazar, field CTO at cloud native security company Edera, notes that the vulnerability highlights how different Kubernetes workloads are in 2026 than they were in 2016.

They’re no longer just stateless apps. They’re “AI training pipelines with proprietary model weights, financial trading systems, and healthcare applications with patient data,” he writes. “The blast radius of a monitoring stack compromise in 2026 is categorically different from 2016.”

The answer, Salazar writes, is architectural isolation, which is what Edera offers (the configuration did not leave Edera users vulnerable, Salazar notes).

For everyone else, until KEP-2862 fully trickles down to production, Salazar advises a number of precautions:

  1. Audit your RBAC policies for nodes/proxy permissions immediately,
  2. Consider whether monitoring tools truly need direct kubelet access,
  3. Implement network policies restricting access to kubelet port 10250,
  4. Plan your migration to KEP-2862 fine-grained permissions when they GA,
  5. Adopt workload isolation technologies that limit blast radius regardless of upstream decisions.
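Precaution No. 3 can be expressed with a standard NetworkPolicy. A minimal sketch, assuming a namespace whose pods need only DNS and API server access, and a CNI plugin that actually enforces NetworkPolicy (names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress       # illustrative name
  namespace: monitoring       # illustrative namespace
spec:
  podSelector: {}             # applies to every pod in the namespace
  policyTypes: ["Egress"]
  egress:
    - ports:                  # allow cluster DNS
        - protocol: UDP
          port: 53
    - ports:                  # allow the API server
        - protocol: TCP
          port: 443
# NetworkPolicy has no deny rules; anything not allowed above,
# including the kubelet's port 10250, is dropped once an egress
# policy selects the pod.
```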




Source: thenewstack.io…
