only ipv6 monitor error #8765
Hm, setting that field should enable the following annotations on the calico/node DaemonSet: I would expect Prometheus to pick up on those, but perhaps we also need to add the containerPort to the spec when that happens? Right now, the only options available for CalicoNodeDaemonSet are the resource requests and limits, per the API: https://github.com/tigera/operator/blob/e5880ff2edf627c98cf12ad29f24bcab011be78d/api/v1/calico_node_types.go#L25-L37 Likely not the problem, but
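For context, a sketch of what such annotations typically look like. This assumes the common `prometheus.io/*` annotation convention; the exact annotations the operator sets were not shown in the comment above:

```yaml
# Assumed example only: pod-template annotations following the common
# prometheus.io convention, which Prometheus service discovery configs
# often key off of. The actual annotations set by the operator may differ.
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9091"   # value of nodeMetricsPort
```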
In the current IPv6 environment, if you add the containerPort, the input labels are address="[fddd:3bcc:a689::66]:$port".
At present I use a less rigorous relabel_configs, which meets my requirements, but I don't know whether it will cause problems in the future.
So I think adding the containerPort is the simplest solution.
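The workaround described above can be sketched as a Prometheus relabel rule. This is a hypothetical example, not the reporter's actual configuration: it matches a bare IPv6 `__address__` (no brackets, no port) and rewrites it into bracketed `[addr]:port` form so the target renders as `[fddd:3bcc:a689::66]:9091`:

```yaml
# Hypothetical relabel_configs workaround, assuming the metrics port is 9091.
# Prometheus anchors the regex against the full label value, so a target
# that already has the form [addr]:port will not match and is left unchanged.
relabel_configs:
  - source_labels: [__address__]
    regex: '([0-9a-fA-F:]+)'     # bare IPv6 address with no port
    replacement: '[$1]:9091'
    target_label: __address__
```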
I am using helm to install tigera-operator. Because I need monitoring, I set nodeMetricsPort: 9091, but in an IPv6-only environment the input labels scraped by Prometheus contain address="fddd:3bcc:a689::66", which causes the monitoring rule to fail. Normally it should be address="[fddd:3bcc:a689::66]:$port", so I hope to inject a port configuration into it:
But the injection failed. I know it may be a problem with my configuration, but I can't find an example. My incorrect configuration is:
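For reference, a minimal sketch of enabling the node metrics port through the operator's Installation resource (the reporter's actual, failing configuration was not included above, so this only illustrates the `nodeMetricsPort` setting mentioned in the report):

```yaml
# Minimal sketch: enabling calico-node metrics via the tigera-operator
# Installation custom resource. Only nodeMetricsPort is taken from the
# report above; the rest is a standard skeleton.
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  nodeMetricsPort: 9091
```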