
Controller does not reconcile upon faulty ingress. #3943

Open
lkoniecz opened this issue Nov 19, 2024 · 0 comments

Describe the bug
A faulty ingress prevents subsequent ingresses from reconciling.

Steps to reproduce

Deploy

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: long-name-ingress
  labels:
    foo: bar
  annotations:
    alb.ingress.kubernetes.io/group.name: group
    external-dns.alpha.kubernetes.io/hostname: very-very-very-very-very-very-very-very-very-very-very-very-very-very-very-long-name-ingress.foo.bar
    kubernetes.io/ingress.class: aws-load-balancer-controller
spec:
  rules:
    - host: very-very-very-very-very-very-very-very-very-very-very-very-very-very-very-long-name-ingress.foo.bar
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  name: http

after that, deploy

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: not-that-long-ingress
  labels:
    foo: bar
  annotations:
    alb.ingress.kubernetes.io/group.name: group
    external-dns.alpha.kubernetes.io/hostname: not-that-long-ingress.foo.bar
    kubernetes.io/ingress.class: aws-load-balancer-controller
spec:
  rules:
    - host: not-that-long-ingress.foo.bar
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  name: http

The second ingress then fails with:
Failed deploy model due to ValidationError: Condition value 'very-very-very-very-very-very-very-very-very-very-very-very-very-very-very-long-name-ingress.foo.bar' contains a character that is not valid status code: 400, request id: 6414deaf-fede-45ad-9328-f6c5ebbcb9ed

Expected outcome
I would expect only the faulty ingress to fail reconciliation; valid ingresses in the same group should continue to be reconciled.

Environment

  • AWS Load Balancer controller version - v2.8.1
  • Kubernetes version - 1.29
  • Using EKS (yes/no), if so version? yes. eks.13

Additional Context:
We utilize the controller in dynamically provisioned environments where the domain name varies, making it difficult to predict if we’ll exceed the 63-character hostname limit. A faulty ingress not only fails to function but also prevents other environments from being provisioned.
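As a minimal sketch of the 63-character limit in question: DNS names (RFC 1035) cap each dot-separated label at 63 characters and the full name at 253, which is what the long hostname above violates. The function name and logic below are illustrative only and are not part of the controller; they just show how such a hostname could be pre-validated before an ingress is applied.

```python
# Illustrative pre-check for the DNS limits this issue runs into:
# each dot-separated label must be 1-63 characters, and the whole
# hostname at most 253 characters (RFC 1035). This is not controller
# code, just a sketch of the validation one might run beforehand.

def hostname_is_valid(hostname: str) -> bool:
    if len(hostname) > 253:
        return False
    return all(0 < len(label) <= 63 for label in hostname.split("."))

# The faulty host: the first label is well over 63 characters.
long_host = "very-" * 15 + "long-name-ingress.foo.bar"
print(hostname_is_valid(long_host))                       # fails the check
print(hostname_is_valid("not-that-long-ingress.foo.bar")) # passes the check
```

A check like this could run in CI for dynamically generated domain names, so an over-long hostname is caught before it ever reaches the controller.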

Additionally, hitting the unique target group quota also prevents the controller from taking further actions.

Given that ingresses are not interconnected and are materialized as separate target groups and rules, I see no reason to block valid ingresses from being reconciled due to the presence of invalid ones. Allowing valid ingresses to proceed would mitigate the impact of these issues.

The problem can manifest in various forms, as evidenced by the following:
Issue #2669
Issue #2042
Issue #3870

There is significant community demand for this functionality. Would it be possible to introduce this behavior as an opt-in feature, perhaps behind a toggle?
