Moving away from ingress-nginx: F5 NGINX Ingress Controller

Context

In November 2025, Kubernetes SIG Network and the Security Response Committee announced the retirement of ingress-nginx, the widely beloved Ingress Controller (IC) responsible for a good chunk of Ingress handling in Kubernetes. Its intended Gateway API successor, InGate, never gained enough momentum, and development on it has also stopped.

With the retirement happening in March 2026, the window is tight for most teams, and the scramble for alternatives is on. Some highlights are Chainguard’s continued maintenance of the solution through their EmeritOSS program and Traefik’s strong marketing campaign and handy migration tool.

In my specific context at work, a very heterogeneous environment with 400+ clusters across multiple cloud providers, we would like to move forward to Gateway API with Cilium. However…

Why not Cilium Gateway API?

Cilium is a very powerful networking tool that implements the Container Network Interface (CNI) standard, serving as a highly scalable ‘backbone’ for the networking side of Kubernetes. My team adopted it as our CNI of choice; it has been deployed for a few years and has been fulfilling its purpose very well. Naturally, when Gateway API first showed up, Cilium was one of the obvious choices to investigate.

At that time (Nov ‘24), the Gateway API spec was still relatively early in development (v1.2), but it already had a wide array of implementations - Traefik, Kong, Istio, kgateway, etc. Since Cilium was already our CNI, its implementation was the easiest to adopt, and we quickly made it available behind a feature flag.

Adoption stayed in the single-digit percentages over the last year and a bit, and looking at it now, a full migration is still not viable. The Cilium team mostly sticks to the upstream Gateway API spec, which comes with multiple restrictions:

1) Authentication/Authorization (see GEP-1494)

2) Self-servicing Certificates via cert-manager (see GEP-1713)

3) TCP/UDP routes are not implemented in Cilium yet, even though they exist (as experimental resources) in the upstream Gateway API

Envoy Gateway uses Envoy underneath, just like Cilium’s Gateway API implementation, and could be an option to explore. Still, we decided to move forward with an Ingress Controller, and that exploration eventually led us to settle on the F5 NGINX Ingress Controller.

F5 NGINX IC: what it is, why pick it?

The company that owns NGINX, F5, has maintained an Ingress Controller based on it for a very long time. Confusingly, it’s named nginx-ingress-controller, which makes searching for it terrible and makes it easy to confuse with the ingress-nginx project maintained in the Kubernetes organization.

Both implement the same Ingress API spec, write NGINX configuration files, and run an NGINX binary, but the F5 controller comes with an optional, paid, closed-source “NGINX Plus” binary that adds some neat extra features, such as cookie-based session persistence and API-driven reconfiguration instead of the traditional SIGHUP-based config reloads.

A huge plus of these similarities is that if you’re doing something very specific and want it to keep working, you can “crack open the hood”: exec into an ingress-nginx pod, inspect the generated NGINX configuration, and try to copy the relevant parts over to the new IC.

These similarities led us to bet on this tool as opposed to Traefik, HAProxy or others, and we quickly migrated all our own workloads to it.

One of our goals was to do the migration while avoiding CRDs entirely, using just the Ingress object itself and related annotations. Since this will be an intermediate step before what I hope is a more definitive move to Gateway API in the future, we did not want to force users to learn new resources if they didn’t have to.

A huge feature of this IC is the ability to define custom annotations and a custom NGINX configuration template. If you’re afraid of fully opening up NGINX configs with snippets, creating your own curated annotations and blocking snippets with something like Kyverno may be an easier way out.
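As a rough sketch of that last idea, a Kyverno ClusterPolicy along these lines would reject Ingresses that carry the snippet annotations (the policy name and message here are made up; double-check the pattern syntax against the Kyverno version you run):

    apiVersion: kyverno.io/v1
    kind: ClusterPolicy
    metadata:
      name: block-nginx-snippet-annotations
    spec:
      validationFailureAction: Enforce
      rules:
        - name: disallow-snippet-annotations
          match:
            any:
              - resources:
                  kinds:
                    - Ingress
          validate:
            message: "NGINX snippet annotations are not allowed; use the curated annotations instead."
            pattern:
              metadata:
                =(annotations):
                  # Negation anchors: the Ingress is rejected if either key is present.
                  X(nginx.org/server-snippets): "null"
                  X(nginx.org/location-snippets): "null"

With snippets blocked like this, users can only reach NGINX behaviour through whatever curated custom annotations you choose to expose.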

In this blog post I’ll go over a few of the biggest issues we faced during the migration and how we solved them.

Authentication/Authorization

We use oauth2-proxy extensively, and together with Keycloak it forms the authentication layer protecting a good chunk of our Ingresses. Some tools like Grafana have direct integration with Keycloak, but others like Prometheus or the Kubernetes Dashboard don’t.

This was the most important part of the migration for us, and unfortunately it requires some ugly code 😭. Here we follow the reference documentation, adapted to our experience.

Defaults and global configs

First, we set some sane defaults in the controller’s ConfigMap. Not setting them globally caused issues, even when we set the same values on each of the Ingresses we migrated.

  proxy-buffer-size: 128k
  proxy-buffers: 8 128k
  proxy-busy-buffers-size: 256k
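
For reference, these keys live in the controller’s global ConfigMap; a minimal sketch, assuming the conventional nginx-config name in an nginx-ingress namespace (both depend on how you installed the controller):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      # Name and namespace depend on your install; nginx-config/nginx-ingress are common defaults.
      name: nginx-config
      namespace: nginx-ingress
    data:
      proxy-buffer-size: "128k"
      proxy-buffers: "8 128k"
      proxy-busy-buffers-size: "256k"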

OAuth2Proxy & Keycloak

Both oauth2-proxy and Keycloak (our IAM tool of choice) have their own Ingresses. Naturally, they are not protected by themselves 😊. Keycloak’s configuration is mostly supported out of the box; note the replacement of nginx.ingress.kubernetes.io/app-root with a location-wide snippet:

    nginx.org/location-snippets: |
      location = / {
        return 301 /auth/;
      }
    nginx.org/ssl-services: keycloak
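
For context, here is roughly where those annotations sit on the Keycloak Ingress; the hostname, class name, and service port below are placeholders, not our actual values:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: keycloak
      namespace: keycloak
      annotations:
        nginx.org/ssl-services: "keycloak"
        nginx.org/location-snippets: |
          location = / {
            return 301 /auth/;
          }
    spec:
      ingressClassName: nginx
      rules:
        - host: keycloak.apps.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: keycloak
                    port:
                      number: 8443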

The oauth2-proxy Ingress itself needs no special configuration.

Protecting each Ingress

Then, the configuration that was previously handled by the elegant nginx.ingress.kubernetes.io/auth-signin and auth-url annotations needs to be rewritten with snippets. If you consider giving end users access to snippets a security risk, please consider moving to Traefik and using its ForwardAuth middleware instead.

    annotations:
      nginx.org/location-snippets: |
        auth_request /_oauth2_validate;
        auth_request_set $auth_cookie $upstream_http_set_cookie;
        add_header Set-Cookie $auth_cookie;
        error_page 401 = @oauth2_redirect;
      nginx.org/server-snippets: |
        location = /_oauth2_validate {
          internal;
          proxy_pass http://oauth2-proxy.{{ .Values.oauth_namespace }}.svc.cluster.local/oauth2/auth;
          proxy_pass_request_body off;
          proxy_set_header Content-Length "";
          proxy_set_header X-Original-URI $request_uri;
          proxy_set_header X-Original-Method $request_method;
          proxy_set_header Host $host;
        }
        location @oauth2_redirect {
          internal;
          add_header Set-Cookie $auth_cookie;
          return 302 https://oauth2-proxy.apps.{{ .Values.dns_subdomain }}/oauth2/start?rd=$scheme://$http_host$request_uri;
        }

In the above snippet, we have one instance of oauth2-proxy installed in the namespace identified by .Values.oauth_namespace, publicly available to our users at https://oauth2-proxy.apps.<dns_subdomain>. If you’re using any kind of network policies/filters, make sure that the namespace where the F5 NGINX IC is installed can reach the oauth_namespace.
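
If NetworkPolicies are in play, this is a minimal sketch of the kind of rule we mean, assuming the controller runs in a namespace called nginx-ingress and oauth2-proxy listens on its default port 4180 (adjust labels, namespace names, and port to your install):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-ingress-controller-to-oauth2-proxy
      namespace: <oauth_namespace>
    spec:
      podSelector:
        matchLabels:
          app.kubernetes.io/name: oauth2-proxy
      policyTypes:
        - Ingress
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: nginx-ingress
          ports:
            - protocol: TCP
              port: 4180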

We could only get things working properly with Redis as the session store for oauth2-proxy: huge cookies of 4+ KB are a real pain with cookie-based sessions, requiring ugly cookie-splitting and joining configuration inside the snippets. We strongly recommend doing the same if you’re facing similar issues.
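
As a sketch, switching oauth2-proxy to Redis-backed sessions mostly comes down to two flags; the Redis Service name and image tag below are illustrative placeholders:

    # Fragment of an oauth2-proxy Deployment; only the two session-store flags matter here.
    containers:
      - name: oauth2-proxy
        image: quay.io/oauth2-proxy/oauth2-proxy:v7.6.0
        args:
          - --session-store-type=redis
          - --redis-connection-url=redis://redis.<oauth_namespace>.svc.cluster.local:6379
          - --cookie-refresh=1h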

Basic Auth

Some of our Ingresses (e.g. the Prometheus /federate endpoint) were using HTTP Basic Auth. It works in a very similar way, but needs a specific Secret type, nginx.org/htpasswd, and the key inside the Secret’s data must be named htpasswd! If you were using a Secret for this before, you will need to migrate to something like this before using the annotation nginx.org/basic-auth-secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: prometheus-federation-auth
      namespace: <namespace>
      labels:
        prometheus-federation-auth: "true"
    type: nginx.org/htpasswd
    data:
      htpasswd: blahblahblah
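
With the Secret in place, the Ingress side is just the annotation; a minimal, illustrative fragment (note that, as with any Secret, the value under data must be base64-encoded and decode to standard user:hash htpasswd lines):

    # Fragment of the Ingress being protected; only the annotation is new.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: prometheus-federate
      namespace: <namespace>
      annotations:
        nginx.org/basic-auth-secret: prometheus-federation-auth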

gRPC, TLS

This is stated pretty clearly in the migration docs, but I’d like to highlight it again as it’s not very obvious.

If you were using nginx.ingress.kubernetes.io/backend-protocol before, you now have to name the Kubernetes Service that the Backend Protocol annotation was targeting. For example, if you are accessing Keycloak behind an Ingress, you might have to move from nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" to nginx.org/ssl-services: "keycloak".
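
Put side by side, assuming a Service named keycloak behind the Ingress:

    # Before (ingress-nginx): the annotation names a protocol
    metadata:
      annotations:
        nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    ---
    # After (F5 NGINX IC): the annotation names the backend Service(s)
    metadata:
      annotations:
        nginx.org/ssl-services: "keycloak"
        # For gRPC backends the analogous annotation is nginx.org/grpc-services,
        # a comma-separated list of Service names; check the annotation docs for your version.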

Conclusion

No plan survives contact with the enemy, so I expect that as this gets deployed to real environments, people will discover more things that don’t work. For header manipulation (e.g. CORS), mTLS, rate limiting, and other complex Ingress configurations, F5 pushes you towards its Custom Resources (Policy, VirtualServer, TransportServer…), but I hope to avoid them in most scenarios.

Maybe I’ll make another blog post when I find out more bugbears!
