
MON-3513: Add availability test for Metrics API #28737

Merged
merged 1 commit into openshift:master on May 7, 2024

Conversation

@machine424 (Contributor) commented Apr 24, 2024

This should ensure the availability of the Metrics API during e2e tests including upgrades.
Thus it should also help with https://issues.redhat.com/browse/MON-3539.

The correctness of the API (whether the right/expected content is returned) should be tested elsewhere; we already have tests for that in CMO, and the HPA tests already make use of it. These tests only check availability.

  • Inspired by the existing availability tests.
  • Maybe this could be generalized to the whole aggregation layer?
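For illustration, a minimal sketch of what an availability-only probe looks like, assuming client-go and a once-per-second poll against a Metrics API path (the path, namespace, and interval are placeholders, not the PR's actual sampler code):

package main

import (
	"context"
	"log"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder client setup; the real monitortest receives an admin
	// rest.Config from the origin test framework instead of a kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	for range time.Tick(time.Second) {
		// Availability only: any successful response counts as "up"; the
		// returned PodMetrics are never inspected (correctness is covered
		// by the CMO and HPA tests mentioned above).
		_, err := client.Discovery().RESTClient().Get().
			AbsPath("/apis/metrics.k8s.io/v1beta1/namespaces/openshift-monitoring/pods").
			DoRaw(ctx)
		if err != nil {
			log.Printf("metrics-api unavailable: %v", err)
		}
	}
}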

@openshift-ci-robot added the jira/valid-reference label (Indicates that this PR references a valid Jira ticket of any type.) on Apr 24, 2024
@openshift-ci-robot commented Apr 24, 2024

@machine424: This pull request references MON-3513 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the task to target the "4.16.0" version, but no target version was set.

In response to this:

This should ensure the availability of the Metrics API during e2e tests including upgrades.
Thus it should also help with https://issues.redhat.com/browse/MON-3539.

The correctness of the API (whether the right/expected content is returned) should be tested elsewhere; we already have tests for that in CMO, and the HPA tests already make use of it. These tests only check availability.

  • Inspired by the existing availability tests.
  • Maybe this could be generalized to the whole aggregation layer?

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.


@machine424
Contributor Author

/retest

2 similar comments

@dgoodwin (Contributor) left a comment

I'll need to check back once you've got job results to make sure the data in artifacts looks good, but this is really cool, and one of very few instances where someone has added additional disruption monitoring for their component.

disruptionBackedName := "metrics-api"

newConnectionTestName := "[sig-instrumentation] disruption/metrics-api connection/new should be available throughout the test"
reusedConnectionTestName := "[sig-instrumentation] disruption/metrics-api connection/reused should be available throughout the test"
Contributor

Just want to share some info on these tests: they're extremely forgiving, the threshold being the P99 over the past three weeks plus a healthy margin before they'll fail. Our primary system for detecting real regressions uses large sets of data and alerting, but this dashboard provides a strong visualization of that same data: https://grafana-loki.ci.openshift.org/d/ISnBj4LVk/disruption?orgId=1 Very handy if you want to monitor how your component is doing. Once this lands, it'll eventually start appearing in TRT's alerting, and we'll get in touch if we detect a sustained change.
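A rough sketch of that thresholding, with an assumed margin and an assumed slice of historical samples (the real framework derives its allowances from aggregated job data, not from values like these):

package disruption

import (
	"math"
	"sort"
	"time"
)

// allowedDisruption illustrates the idea described above: the budget is the
// P99 of historical disruption durations (e.g. the past three weeks of runs)
// plus a grace margin. The percentile math and margin here are illustrative.
func allowedDisruption(historical []time.Duration, margin time.Duration) time.Duration {
	if len(historical) == 0 {
		return margin
	}
	sorted := append([]time.Duration(nil), historical...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	idx := int(math.Ceil(0.99*float64(len(sorted)))) - 1
	return sorted[idx] + margin
}

A run would then only fail the disruption test when its observed disruption exceeds that allowance.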

Contributor Author

Really interesting, thanks for the details.
We'll try to have a look at those dashboards from time to time.

// TODO: clean up/refactor following.

// For nodes metrics
newConnections, err := createAPIServerBackendSampler(adminRESTConfig, disruptionBackedName, "/apis/metrics.k8s.io/v1beta1/nodes", monitorapi.NewConnectionType)
Contributor

How big is the response from this request? Looks like it could be pretty big, and we'll be polling it once a second. Could we identify a smaller one if it is quite large?

Contributor Author

It'll depend on the node count.
I pushed a new version where we only ask for the metrics of the Metrics API backend Pods themselves.

Contributor Author

Actually, I think that checking "/apis/metrics.k8s.io/v1beta1" is sufficient.
The response looks like:

oc get --raw "/apis/metrics.k8s.io/v1beta1"  | jq
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "nodes",
      "singularName": "node",
      "namespaced": false,
      "kind": "NodeMetrics",
      "verbs": [
        "get",
        "list"
      ]
    },
    {
      "name": "pods",
      "singularName": "pod",
      "namespaced": true,
      "kind": "PodMetrics",
      "verbs": [
        "get",
        "list"
      ]
    }
  ]
}

And when the backends are down:

Error from server (ServiceUnavailable): the server is currently unable to handle the request
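A small sketch of how that distinction could be expressed with client-go, assuming the sampler only cares about the status class (the function name and package are placeholders, not the PR's code):

package probe

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/kubernetes"
)

// checkMetricsAPI hits the small discovery document shown above and reports
// availability only: a successful response means "up", the ServiceUnavailable
// error seen when the backends are down means "disrupted", and anything else
// is surfaced to the caller.
func checkMetricsAPI(ctx context.Context, client kubernetes.Interface) (bool, error) {
	_, err := client.Discovery().RESTClient().Get().
		AbsPath("/apis/metrics.k8s.io/v1beta1").
		DoRaw(ctx)
	switch {
	case err == nil:
		return true, nil
	case apierrors.IsServiceUnavailable(err):
		return false, nil
	default:
		return false, fmt.Errorf("metrics API probe failed: %w", err)
	}
}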

reusedConnections, err = createAPIServerBackendSampler(adminRESTConfig, disruptionBackedName, fmt.Sprintf("/apis/metrics.k8s.io/v1beta1/namespaces/%s/pods", monitoringNamespace), monitorapi.ReusedConnectionType)
if err != nil {
	return err
}
Contributor

It looks like you're re-using the disruptionBackedName here, which is probably a bug as both results are probably getting muddled or ignored. But do you actually need a second set of endpoints being monitored here? Typically we'd only have one pair, making sure the API server handling the request is up, not that the various APIs within that server are working. Assuming these are handled by the same API server, I'd pitch dropping one of these sets and just hitting one new+reused pair. The CPU and network load of these requests every second is not insignificant.

Contributor Author

I agree; as mentioned in the description (#28737 (comment)), we're only interested in availability (status codes 200-399). I pushed a new version.
I hope it's OK now.

@openshift-trt-bot

Job Failure Risk Analysis for sha: 93a0eff

Job Name Failure Risk
pull-ci-openshift-origin-master-e2e-openstack-ovn IncompleteTests
Tests for this run (16) are below the historical average (1563): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems)
pull-ci-openshift-origin-master-e2e-metal-ipi-sdn IncompleteTests
Tests for this run (15) are below the historical average (982): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems)
pull-ci-openshift-origin-master-e2e-metal-ipi-ovn-ipv6 IncompleteTests
Tests for this run (15) are below the historical average (847): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems)
pull-ci-openshift-origin-master-e2e-gcp-ovn-upgrade IncompleteTests
Tests for this run (19) are below the historical average (665): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems)
pull-ci-openshift-origin-master-e2e-gcp-ovn-rt-upgrade IncompleteTests
Tests for this run (19) are below the historical average (572): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems)
pull-ci-openshift-origin-master-e2e-gcp-ovn IncompleteTests
Tests for this run (18) are below the historical average (1472): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems)
pull-ci-openshift-origin-master-e2e-gcp-csi IncompleteTests
Tests for this run (18) are below the historical average (557): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems)
pull-ci-openshift-origin-master-e2e-aws-ovn-upgrade IncompleteTests
Tests for this run (20) are below the historical average (668): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems)
pull-ci-openshift-origin-master-e2e-aws-ovn-single-node-upgrade IncompleteTests
Tests for this run (19) are below the historical average (2111): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems)
pull-ci-openshift-origin-master-e2e-aws-ovn-single-node-serial IncompleteTests
Tests for this run (18) are below the historical average (708): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems)
pull-ci-openshift-origin-master-e2e-aws-ovn-single-node IncompleteTests
Tests for this run (18) are below the historical average (1573): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems)
pull-ci-openshift-origin-master-e2e-aws-ovn-serial IncompleteTests
Tests for this run (18) are below the historical average (757): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems)
pull-ci-openshift-origin-master-e2e-aws-ovn-fips IncompleteTests
Tests for this run (18) are below the historical average (1270): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems)
pull-ci-openshift-origin-master-e2e-aws-ovn-cgroupsv2 IncompleteTests
Tests for this run (18) are below the historical average (1577): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems)
pull-ci-openshift-origin-master-e2e-aws-csi IncompleteTests
Tests for this run (18) are below the historical average (665): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems)
pull-ci-openshift-origin-master-e2e-agnostic-ovn-cmd IncompleteTests
Tests for this run (18) are below the historical average (522): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems)

@machine424
Contributor Author

I'll need to check back once you've got job results to make sure the data in artifacts looks good, but this is really cool, and one of very few instances where someone has added additional disruption monitoring for their component.

Thanks, I'll check with the team, and maybe we'll add more for the other components (prometheus, etc.). The monitortest framework really makes it easy to write such tests, good job on that :)

What do you think about adding (in another PR) a generic disruption check for all the aggregation-layer APIs? I assume it's possible to get the available APIs from Kube, or we can provide a list of URLs...

I'll continue testing and make the tests fail to learn more about the behavior, then I'll ask you for another review once the CI is greener :)
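One possible shape for that generic aggregation-layer check, sketched here with the kube-aggregator client (an assumption; this is not part of the PR): list APIService objects, keep the ones backed by a service (i.e. actually served through the aggregation layer), and probe each group/version discovery path.

package probe

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
	aggregatorclient "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
)

// aggregatedProbePaths lists APIService objects and returns a discovery path
// ("/apis/<group>/<version>") for every API that is served by an aggregated
// backend (spec.service set) instead of kube-apiserver itself.
func aggregatedProbePaths(ctx context.Context, cfg *rest.Config) ([]string, error) {
	client, err := aggregatorclient.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	apiServices, err := client.ApiregistrationV1().APIServices().List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var paths []string
	for _, svc := range apiServices.Items {
		if svc.Spec.Service == nil {
			continue // served locally by kube-apiserver, no aggregation layer involved
		}
		paths = append(paths, fmt.Sprintf("/apis/%s/%s", svc.Spec.Group, svc.Spec.Version))
	}
	return paths, nil
}

Each returned path could then back a new/reused sampler pair, keeping in mind the per-second load concern raised further down in this thread.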

var deployment *appsv1.Deployment
deployment, err = kubeClient.AppsV1().Deployments(monitoringNamespace).Get(ctx, metricsServerDeploymentName, metav1.GetOptions{})
if apierrors.IsNotFound(err) {
	// TODO: remove this in 4.17
Contributor Author

TODO: open ticket for this before merge

@machine424 force-pushed the metr-ser branch 2 times, most recently from 6336ec7 to 4d48684 on April 24, 2024 17:06
@machine424
Contributor Author

/retest

@openshift-trt-bot

Job Failure Risk Analysis for sha: 4d48684

Job Name Failure Risk
pull-ci-openshift-origin-master-e2e-gcp-csi IncompleteTests
Tests for this run (25) are below the historical average (542): IncompleteTests (not enough tests ran to make a reasonable risk analysis; this could be due to infra, installation, or upgrade problems)

@machine424
Contributor Author

/retest

@machine424
Contributor Author

/payload-with-prs pull-ci-openshift-origin-master-e2e-aws-ovn-upgrade openshift/api#1865 openshift/cluster-monitoring-operator#2268


openshift-ci bot commented Apr 25, 2024

@machine424: it appears that you have attempted to use some version of the payload command, but your comment was incorrectly formatted and cannot be acted upon. See the docs for usage info.

@machine424
Contributor Author

/payload-with-prs periodic-ci-openshift-release-master-ci-4.16-e2e-aws-ovn-upgrade openshift/api#1865 openshift/cluster-monitoring-operator#2268


openshift-ci bot commented Apr 25, 2024

@machine424: it appears that you have attempted to use some version of the payload command, but your comment was incorrectly formatted and cannot be acted upon. See the docs for usage info.

@machine424
Contributor Author

/payload-job-with-prs periodic-ci-openshift-release-master-ci-4.16-e2e-aws-ovn-upgrade openshift/api#1865 openshift/cluster-monitoring-operator#2268


openshift-ci bot commented Apr 25, 2024

@machine424: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-ci-4.16-e2e-aws-ovn-upgrade

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/ce85df90-02de-11ef-8c62-2e3db784f5f9-0

@machine424
Contributor Author

/payload-job-with-prs pull-ci-openshift-origin-master-e2e-aws-ovn-upgrade openshift/api#1865 openshift/cluster-monitoring-operator#2268


openshift-ci bot commented Apr 25, 2024

@machine424: trigger 0 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

@machine424
Contributor Author

/payload-job-with-prs periodic-ci-openshift-release-master-ci-4.16-e2e-aws-ovn openshift/api#1865 openshift/cluster-monitoring-operator#2268


openshift-ci bot commented Apr 25, 2024

@machine424: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-ci-4.16-e2e-aws-ovn

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/77395d40-02e1-11ef-97dd-2274b69c7a75-0

@machine424
Contributor Author

/retest pull-ci-openshift-origin-master-e2e-aws-ovn-upgrade


openshift-ci bot commented Apr 25, 2024

@machine424: The /retest command does not accept any targets.
The following commands are available to trigger required jobs:

  • /test e2e-aws-jenkins
  • /test e2e-aws-ovn-fips
  • /test e2e-aws-ovn-image-registry
  • /test e2e-aws-ovn-serial
  • /test e2e-gcp-ovn
  • /test e2e-gcp-ovn-builds
  • /test e2e-gcp-ovn-image-ecosystem
  • /test e2e-gcp-ovn-upgrade
  • /test e2e-metal-ipi-ovn-ipv6
  • /test images
  • /test lint
  • /test unit
  • /test verify
  • /test verify-deps

The following commands are available to trigger optional jobs:

  • /test 4.12-upgrade-from-stable-4.11-e2e-aws-ovn-upgrade-rollback
  • /test e2e-agnostic-ovn-cmd
  • /test e2e-aws
  • /test e2e-aws-csi
  • /test e2e-aws-disruptive
  • /test e2e-aws-etcd-recovery
  • /test e2e-aws-multitenant
  • /test e2e-aws-ovn
  • /test e2e-aws-ovn-cgroupsv2
  • /test e2e-aws-ovn-etcd-scaling
  • /test e2e-aws-ovn-kubevirt
  • /test e2e-aws-ovn-single-node
  • /test e2e-aws-ovn-single-node-serial
  • /test e2e-aws-ovn-single-node-upgrade
  • /test e2e-aws-ovn-upgrade
  • /test e2e-aws-ovn-upi
  • /test e2e-aws-proxy
  • /test e2e-azure
  • /test e2e-azure-ovn-etcd-scaling
  • /test e2e-baremetalds-kubevirt
  • /test e2e-gcp-csi
  • /test e2e-gcp-disruptive
  • /test e2e-gcp-fips-serial
  • /test e2e-gcp-ovn-etcd-scaling
  • /test e2e-gcp-ovn-rt-upgrade
  • /test e2e-gcp-ovn-techpreview
  • /test e2e-gcp-ovn-techpreview-serial
  • /test e2e-metal-ipi-ovn-dualstack
  • /test e2e-metal-ipi-ovn-dualstack-local-gateway
  • /test e2e-metal-ipi-sdn
  • /test e2e-metal-ipi-serial
  • /test e2e-metal-ipi-serial-ovn-ipv6
  • /test e2e-metal-ipi-virtualmedia
  • /test e2e-openstack-ovn
  • /test e2e-openstack-serial
  • /test e2e-vsphere
  • /test e2e-vsphere-ovn-dualstack-primaryv6
  • /test e2e-vsphere-ovn-etcd-scaling
  • /test okd-e2e-gcp

Use /test all to run the following jobs that were automatically triggered:

  • pull-ci-openshift-origin-master-e2e-agnostic-ovn-cmd
  • pull-ci-openshift-origin-master-e2e-aws-csi
  • pull-ci-openshift-origin-master-e2e-aws-ovn-cgroupsv2
  • pull-ci-openshift-origin-master-e2e-aws-ovn-fips
  • pull-ci-openshift-origin-master-e2e-aws-ovn-serial
  • pull-ci-openshift-origin-master-e2e-aws-ovn-single-node
  • pull-ci-openshift-origin-master-e2e-aws-ovn-single-node-serial
  • pull-ci-openshift-origin-master-e2e-aws-ovn-single-node-upgrade
  • pull-ci-openshift-origin-master-e2e-aws-ovn-upgrade
  • pull-ci-openshift-origin-master-e2e-gcp-csi
  • pull-ci-openshift-origin-master-e2e-gcp-ovn
  • pull-ci-openshift-origin-master-e2e-gcp-ovn-rt-upgrade
  • pull-ci-openshift-origin-master-e2e-gcp-ovn-upgrade
  • pull-ci-openshift-origin-master-e2e-metal-ipi-ovn-ipv6
  • pull-ci-openshift-origin-master-e2e-metal-ipi-sdn
  • pull-ci-openshift-origin-master-e2e-openstack-ovn
  • pull-ci-openshift-origin-master-images
  • pull-ci-openshift-origin-master-lint
  • pull-ci-openshift-origin-master-unit
  • pull-ci-openshift-origin-master-verify
  • pull-ci-openshift-origin-master-verify-deps

In response to this:

/retest pull-ci-openshift-origin-master-e2e-aws-ovn-upgrade

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@machine424
Contributor Author

/test e2e-gcp-ovn-upgrade e2e-aws-ovn-upgrade e2e-aws-ovn

@dgoodwin
Contributor

I'll need to check back once you've got job results to make sure the data in artifacts looks good, but this is really cool, and one of very few instances where someone has added additional disruption monitoring for their component.

Thanks, I'll check with the team, and maybe we'll add more for the other components (prometheus, etc.). The monitortest framework really makes it easy to write such tests, good job on that :)

What do you think about adding (in another PR) a generic disruption check for all the aggregation-layer APIs? I assume it's possible to get the available APIs from Kube, or we can provide a list of URLs...

I'll continue testing and make the tests fail to learn more about the behavior, then I'll ask you for another review once the CI is greener :)

It's an awesome idea, but we've caught this framework noticeably impacting CPU/memory/network when the polling increased dramatically. If this involved a bunch of new endpoints, it could very likely start to skew CPU, at the very least, in ways that show up and send people on wild goose chases. If we knew about it and were expecting it, perhaps we could get away with it, provided the resource requirements are not significantly different. How many endpoints would we be talking about?

@openshift-trt-bot

Job Failure Risk Analysis for sha: 5fed856

Job Name Failure Risk
pull-ci-openshift-origin-master-e2e-aws-ovn-single-node-serial Medium
[bz-networking][invariant] alert/OVNKubernetesResourceRetryFailure should not be at or above info
This test has passed 95.00% of 60 runs on jobs ['periodic-ci-openshift-release-master-nightly-4.16-e2e-aws-ovn-single-node-serial'] in the last 14 days.

Open Bugs
MCD degraded on content mismatch for resolv-prepender script

@machine424
Contributor Author

It's an awesome idea, but we've caught this framework noticeably impacting CPU/memory/network when the polling increased dramatically. If this involved a bunch of new endpoints, it could very likely start to skew CPU, at the very least, in ways that show up and send people on wild goose chases. If we knew about it and were expecting it, perhaps we could get away with it, provided the resource requirements are not significantly different. How many endpoints would we be talking about?

I see, I just need to discuss that with the team, but I can think of 3 or 4 APIs.
Regarding the generic test for the other aggregated APIs, I was suggesting that for the other teams that may be interested in such tests; we (monitoring) only provide the Metrics API.

@machine424
Contributor Author

/test e2e-gcp-ovn-upgrade e2e-aws-ovn-upgrade e2e-aws-ovn

@openshift-trt-bot

Job Failure Risk Analysis for sha: 2deef4c

Job Name Failure Risk
pull-ci-openshift-origin-master-e2e-aws-ovn-single-node-serial Low
[sig-arch] events should not repeat pathologically for ns/openshift-etcd
This test has passed 77.50% of 40 runs on release 4.16 [amd64 aws ovn serial single-node] in the last week.

1 similar comment

@dgoodwin
Contributor

dgoodwin commented May 7, 2024

Code looks good, and I see the data coming out looking good.

We could probably get away with adding another 3-4 of these; your generic framework could be very useful.

/lgtm

@openshift-ci bot added the lgtm label (Indicates that a PR is ready to be merged.) on May 7, 2024

openshift-ci bot commented May 7, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dgoodwin, machine424

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci bot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files.) on May 7, 2024
@machine424
Contributor Author

/retest

1 similar comment


openshift-ci bot commented May 7, 2024

@machine424: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: ci/prow/e2e-aws-ovn-single-node-upgrade
Commit: 2deef4c
Required: false
Rerun command: /test e2e-aws-ovn-single-node-upgrade

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@openshift-merge-bot bot merged commit c38cf13 into openshift:master on May 7, 2024
22 of 23 checks passed