WIP fix e2e-kind-cloud-provider-loadbalancer #124729

Open · wants to merge 7 commits into master

Conversation

@danwinship (Contributor) commented May 7, 2024

What type of PR is this?

/kind bug
/kind cleanup
/kind failing-test

What this PR does / why we need it:

Moved out of #124660. Once we get the loadbalancer tests running again, we will get them working with cloud-provider-kind.

Notes:

  • LoadBalancers should be able to change the type and ports of a [ TCP / UDP ] service
    • Creates a ClusterIP service, checks reachability by ClusterIP, converts to NodePort, checks reachability by NodePort, converts to LoadBalancer (requesting a static IP on GCE), checks reachability by LB, changes NodePort, checks reachability by LB, changes service port, checks reachability by LB, scales to 0, checks NON-reachability by LB, scales to 1, checks reachability by LB, converts to ClusterIP, waits, checks NON-reachability by LB.
    • was SkipUnlessProviderIs("gce", "gke", "aws")
    • ✔️ passes with cpkind after removing skip and GCE-specific subtest
    • TCP version should pass on all cloud providers that support LBs. UDP version should pass on all cloud providers that support UDP LBs.
  • LoadBalancers should only allow access from service loadbalancer source ranges
    • Creates "accept" and "drop" pods, creates LB service with LoadBalancerSourceRanges: ${accept_pod_ip}, checks that accept→LB works and drop→LB does not, changes LBSR to ${drop_pod_ip}, checks that things are reversed, unsets LBSR, checks that both pods can connect.
    • was SkipUnlessProviderIs("gce", "gke", "aws", "azure")
    • 🤷 For VIP-type LBs this does not test the LB functionality; it only tests kube-proxy's implementation of LB short-circuiting and LBSR.
    • ❌ For proxy-type LBs (like cpkind), this assumes that pod-to-LB connections will not be masqueraded, which is likely to be false.
    • Rewriting the current test to use node IPs rather than pod IPs would fix the proxy-type LB case (assuming that the LB supports LoadBalancerSourceRanges) but wouldn't improve the VIP case.
    • Connecting from the e2e.test binary itself would test the LB rather than kube-proxy in both cases.
  • LoadBalancers should have session affinity work for LoadBalancer service with ESIPP [ on / off ]
    • Creates a service with affinity and appropriate eTP, repeatedly connects (from e2e.test) and checks that it gets the same backend each time (a rough sketch of this check appears after this list).
    • was SkipIfProviderIs("aws")
    • ❌ cpkind doesn't support affinity but this still passes sometimes
  • LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP [ on / off ]
    • Creates a service with appropriate eTP but no affinity, repeatedly connects (from e2e.test) and checks that it doesn't always get the same backend, enables affinity, checks again that it now does always get the same backend
    • was SkipIfProviderIs("aws")
    • ❌ cpkind doesn't support affinity but this still passes sometimes
  • LoadBalancers should handle load balancer cleanup finalizer for service
    • Creates LB, checks for finalizer, converts to ClusterIP, checks that finalizer is removed, converts back to LoadBalancer, checks for finalizer
    • Tests functionality from the generic cloud provider controller code; should work for all providers.
    • ✔️ passes with cpkind
  • LoadBalancers should be able to create LoadBalancer Service without NodePort and change it
    • Creates ClusterIP service, converts to LB-without-NodePort (requesting a static IP on GCE), checks reachability by LB, updates AllocateLoadBalancerNodePorts, checks that a NodePort was allocated, checks reachability by LB.
    • was SkipUnlessProviderIs("gce", "gke", "aws")
    • ❌ doesn't deal with clouds that require nodeports
    • ✔️ currently skipped by e2e-kind-cloud-provider-loadbalancer, so doesn't cause a test failure
  • LoadBalancers should be able to preserve UDP traffic when server pod cycles for a LoadBalancer service [on different nodes / on the same node]
    • Creates UDP LoadBalancer service, spawns thread to repeatedly connect to LB with same source port, adds one pod, ensures that the other thread reached it at least once, adds another pod on different/same node, deletes the first pod, ensures that the other thread reached the new pod at least once.
    • was SkipUnlessProviderIs("gce", "gke", "azure")
    • ✔️ passes with cpkind
  • LoadBalancers should not have connectivity disruption during rolling update with externalTrafficPolicy=[ Cluster / Local ]
    • Creates DaemonSet, creates LB pointing to it, spawns a thread to repeatedly connect, does 5 rolling updates of the DaemonSet, checks that number of failed connections is within tolerance.
    • ✔️ passes with cpkind
  • LoadBalancers ESIPP should work for type=LoadBalancer
    • Creates eTP:Local LB service, connects via LB (from e2e.test), checks if source IP was preserved
    • was SkipUnlessProviderIs("gce", "gke")
    • ❌ semantics of eTP:Local via Proxy-type LBs are not well-defined?
    • ✔️ skipped for Proxy-type LBs (with this PR)
  • LoadBalancers ESIPP should work for type=NodePort
    • Creates eTP:Local NodePort service, connects from pod to every node's NodePort, checks if source IP was preserved
    • was SkipUnlessProviderIs("gce", "gke")
    • 🤷 does not actually use a LoadBalancer, but is part of LB tests
    • ❌ assumes pod-to-different-node-nodeIP is unmasqueraded, which is not true for most network plugins; should use hostNetwork pod instead
  • LoadBalancers ESIPP should only target nodes with endpoints
    • Creates eTP:Local LB service. Picks 3 nodes. For each node: adds endpoints on that node, checks reachability via LB, checks HealthCheckNodePorts on all nodes, delete endpoints.
    • was SkipUnlessProviderIs("gce", "gke")
    • ✔️ passes with cpkind
  • LoadBalancers ESIPP should work from pods
    • Creates eTP:Local LB service, connects via LB from pod, checks if source IP was preserved
    • was SkipUnlessProviderIs("gce", "gke")
    • ❌ semantics of eTP:Local via Proxy-type LBs are not well-defined?
    • ✔️ skipped for Proxy-type LBs (with this PR)
  • LoadBalancers ESIPP should handle updates to ExternalTrafficPolicy field
    • Creates eTP:Local LB service, saves its HealthCheckNodePort, converts to eTP:Cluster, confirms that it's reachable on NodePorts from every node where it doesn't have an endpoint, confirms that the old HealthCheckNodePort does not report healthiness on the nodes where it does have an endpoint, confirms that traffic through the LB IP does not preserve source IP, converts the service back to Local (with same HCNP), confirms that the LB IP now does preserve source IP
    • (There are multiple problems with this test...)
    • was SkipUnlessProviderIs("gce", "gke")
    • ❌ semantics of eTP:Local via Proxy-type LBs are not well-defined?
    • ❌ does not pass with cpkind
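
(Sketch referenced from the session-affinity items above: a hedged, standalone illustration of what "repeatedly connects and checks that it gets the same backend each time" boils down to. This is not the framework's actual helper; the LB address, port, and the agnhost-style /hostname endpoint are assumptions.)

package main

import (
	"fmt"
	"io"
	"net"
	"net/http"
	"strconv"
	"strings"
	"time"
)

// backendsSeen hits the load balancer repeatedly and records which backend
// hostname answered each time. With ClientIP session affinity we expect a
// single key; without affinity (and with several backends) we eventually
// expect more than one.
func backendsSeen(lbIP string, port, attempts int) map[string]int {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{DisableKeepAlives: true}, // force a new connection per request
	}
	seen := map[string]int{}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get("http://" + net.JoinHostPort(lbIP, strconv.Itoa(port)) + "/hostname")
		if err != nil {
			continue // tolerate transient errors while the LB is still programming
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		seen[strings.TrimSpace(string(body))]++
	}
	return seen
}

func main() {
	// Placeholder ingress IP and port; the real test reads them from the Service status.
	fmt.Println(backendsSeen("203.0.113.10", 80, 15))
}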

Does this PR introduce a user-facing change?

NONE

@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. release-note-none Denotes a PR that doesn't merit a release note. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. kind/bug Categorizes issue or PR as related to a bug. kind/cleanup Categorizes issue or PR as related to cleaning up code, process, or technical debt. kind/failing-test Categorizes issue or PR as related to a consistently or frequently failing test. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels May 7, 2024
@k8s-ci-robot (Contributor) commented:

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added needs-priority Indicates a PR lacks a `priority/foo` label and requires one. area/test sig/network Categorizes an issue or PR as relevant to SIG Network. sig/testing Categorizes an issue or PR as relevant to SIG Testing. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels May 7, 2024
@k8s-ci-robot k8s-ci-robot requested review from SataQiu and tnqn May 7, 2024 15:41
@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label May 7, 2024
Allow either drop or reject; we previously made the same change for
TCP load balancers.
@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels May 9, 2024
@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: danwinship

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

The existing test had two problems:

  - It only made connections from within the cluster, so for VIP-type
    LBs, the connections would always be short-circuited and so this
    only tested kube-proxy's LBSR implementation, not the cloud's.

  - For non-VIP-type LBs, it would only work if pod-to-LB connections
    were not masqueraded, which is not the case for most network
    plugins.

Fix this by (a) testing connectivity from the test binary, so as to
test filtering external IPs, and ensure we're testing the cloud's
behavior; and (b) using both pod and node IPs when testing the
in-cluster case.

Also some general cleanup of the test case.
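
(Hedged sketch of point (a) above: dial the LB's external IP from the machine running e2e.test and wait until reachability matches what the current LoadBalancerSourceRanges should allow. The helper name, polling interval, and placeholder address are assumptions, not the test's actual code.)

package main

import (
	"fmt"
	"net"
	"strconv"
	"time"
)

// checkLBReachability polls until connecting to the LB from this process
// either succeeds or fails as expected; cloud firewall/LB programming can
// lag behind the Service update, hence the retry loop.
func checkLBReachability(lbIP string, port int, expectReachable bool, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(lbIP, strconv.Itoa(port)), 3*time.Second)
		reached := err == nil
		if reached {
			conn.Close()
		}
		if reached == expectReachable {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("expected reachable=%v from e2e.test, still observing reachable=%v", expectReachable, reached)
		}
		time.Sleep(5 * time.Second)
	}
}

func main() {
	// Placeholder ingress IP/port; with e2e.test's IP outside LoadBalancerSourceRanges
	// the call would instead use expectReachable=false.
	fmt.Println(checkLBReachability("203.0.113.10", 80, true, 2*time.Minute))
}
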
(And in particular, remove `[Feature:LoadBalancer]` from it.)
It previously assumed that pod-to-other-node-nodeIP would be
unmasqueraded, but this is not the case for most network plugins. Use
a HostNetwork exec pod to avoid problems.

Also, fix up a bit for dual-stack.
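
(A sketch of the HostNetwork exec pod mentioned above, reusing the CreateExecPodOrFail / SetNodeSelection pattern quoted elsewhere in this review; the function name is hypothetical and the exact helper signatures should be checked against the framework.)

package network

import (
	"context"

	v1 "k8s.io/api/core/v1"
	clientset "k8s.io/client-go/kubernetes"
	e2epod "k8s.io/kubernetes/test/e2e/framework/pod"
)

// createHostNetworkExecPod creates an exec pod in the host network namespace,
// so connections it makes to other nodes' node IPs leave with a node IP as
// the source rather than a (possibly masqueraded) pod IP.
func createHostNetworkExecPod(ctx context.Context, cs clientset.Interface, ns, nodeName string) *v1.Pod {
	return e2epod.CreateExecPodOrFail(ctx, cs, ns, "host-exec",
		func(pod *v1.Pod) {
			pod.Spec.HostNetwork = true
			// Pin to a node via NodeSelection rather than hardcoding NodeName,
			// per the review comment further down.
			e2epod.SetNodeSelection(&pod.Spec, e2epod.NodeSelection{Name: nodeName})
		})
}
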
The LoadBalancer test "should handle updates to ExternalTrafficPolicy
field" made multiple unsafe assumptions; in particular, that nodeports
never get recycled, and that Cluster traffic policy services are
*required* (rather than merely *allowed*) to masquerade traffic.

There is really no good way to prove that a given load balancer is
implementing Cluster behavior rather than Local behavior without doing
[Disruptive] things to the cluster.

However, we can't test that HealthCheckNodePorts are working correctly
without creating a LoadBalancer, so rework the test to be more about
that.
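
(For context, a sketch of what probing a HealthCheckNodePort amounts to, roughly what a cloud LB's health checker does. The /healthz path and the 200-vs-non-200 convention reflect kube-proxy's per-service health check server as I understand it; treat the details as assumptions rather than the e2e framework's actual helper.)

package main

import (
	"fmt"
	"net"
	"net/http"
	"strconv"
	"time"
)

// nodeReportsLocalEndpoints probes a node's HealthCheckNodePort; HTTP 200
// means kube-proxy on that node claims local endpoints for the eTP:Local
// service, so the LB should keep the node in rotation; anything else means
// the node should be excluded.
func nodeReportsLocalEndpoints(nodeIP string, healthCheckNodePort int) (bool, error) {
	client := &http.Client{Timeout: 5 * time.Second}
	url := "http://" + net.JoinHostPort(nodeIP, strconv.Itoa(healthCheckNodePort)) + "/healthz"
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	// Placeholder node IP and port; the reworked test would iterate over the chosen nodes.
	fmt.Println(nodeReportsLocalEndpoints("172.18.0.3", 30321))
}
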
@k8s-ci-robot k8s-ci-robot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels May 10, 2024
Comment on lines +825 to +830
// FIXME: this test is supposed to ensure that "a LoadBalancer UDP service
// doesn't blackhole the traffic to the node when the pod backend is
// destroyed and the traffic has to fall back to another pod", but it's
// possible that a connection to backend 2 will succeed at this point
// before backend 1 is deleted, in which case no further successful
// connections are required to pass.
@aojea (Member) commented May 13, 2024:

this cannot happen; the test waits until pod1 is deleted and also validates that the endpoints are missing, to avoid this problem:

  e2epod.NewPodClient(f).DeleteSync(ctx, podBackend1, metav1.DeleteOptions{}, e2epod.DefaultPodDeletionTimeout)

@danwinship (Contributor, Author) replied:

It waits until pod 1 is deleted to start checking if pod 2 has been reached, but it doesn't ensure that pod 2 was actually reached after pod 1 was deleted.

main thread                      DialUDP goroutine
-----------                      -----------------
"creating a backend pod"
"checking ... backend 1"
...                              DialUDP()
...                              hostnames.Insert("hostname1")
hostnames.Has("hostname1")
"creating a second backend pod"
                                 DialUDP()
                                 hostnames.Insert("hostname2")
DeleteSync
           (kube-proxy breaks here due to bugs)
                                 DialUDP()
                                 Logf("Failed to connect")
"checking ... backend 2"
hostnames.Has("hostname2")
Zarro boogs found!

@aojea (Member) replied May 15, 2024:

we always use the same source IP and source port (see L754); you can't get to backend2 unless the UDP load balancer does per-packet round robin, which is not common, so I'd say this is a safe assumption:

 			laddr, err := net.ResolveUDPAddr("udp", ":54321")
			if err != nil {
				framework.Failf("Failed to resolve local address: %v", err)
			}
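
(For readers following along, a standalone illustration of that fixed-source-port pattern: dialing with the same local UDP address each time keeps the 5-tuple stable, so a flow-hashing load balancer keeps mapping us to the same backend until it goes away. The LB address and the "hostname" request are placeholders, not the test's exact protocol.)

package main

import (
	"fmt"
	"net"
	"time"
)

// dialOnce sends one request to the LB from a fixed local address and returns
// the backend's reply (the test's backends answer with their hostname).
func dialOnce(laddr *net.UDPAddr, lbAddr string) (string, error) {
	raddr, err := net.ResolveUDPAddr("udp", lbAddr)
	if err != nil {
		return "", err
	}
	conn, err := net.DialUDP("udp", laddr, raddr)
	if err != nil {
		return "", err
	}
	defer conn.Close()
	_ = conn.SetDeadline(time.Now().Add(3 * time.Second))
	if _, err := conn.Write([]byte("hostname")); err != nil {
		return "", err
	}
	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		return "", err
	}
	return string(buf[:n]), nil
}

func main() {
	laddr, _ := net.ResolveUDPAddr("udp", ":54321") // same fixed source port as above
	fmt.Println(dialOnce(laddr, "203.0.113.10:80")) // placeholder LB address
}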

@@ -2706,6 +2692,40 @@ var _ = common.SIGDescribe("Services", func() {
}
})

ginkgo.It("should support externalTrafficPolicy=Local for type=NodePort", func(ctx context.Context) {
Member comment:

this is failing; we have this test here (#123622), where we removed one of the use cases

@aojea (Member) commented May 13, 2024

There are 4 failing tests and 3 flakes in the current state: https://testgrid.k8s.io/sig-network-kind#sig-network-kind,%20loadbalancer
I don't know whether this PR will solve the flakes.

I think the test for NodePort with externalTrafficPolicy: Local is not correct.

Session affinity is implemented now and looks correct; you can see in the testgrid that those tests are not failing. The problem seems to be with externalTrafficPolicy.

@danwinship (Contributor, Author) commented:

I was having trouble fixing the eTP:Local / NodePort test because there is no obvious set of assumptions about source IP preservation that works for both GCE and kind.

The test currently uses pod-network pods and expects that pod-to-other-node-IP will preserve source IP, which is not something we require.

I tried rewriting it to use host-network pods, assuming that host-network-pod-to-other-node-IP would use the node IP as the source IP, which works on kind, but fails on GCE because the node-to-node connection ends up using the docker0(?) IP as the source IP rather than the primary node IP. (That is, it appears to use the .1 IP from the node's PodCIDR.) (Lots of plugins end up using a "pod-like" node IP for node-to-pod-network traffic. It's surprising that GCE uses it for node-to-node traffic, but k8s doesn't actually require anything specific for that case, so it's allowed.)

Anyway, that was the point when I pinged you about disabling the kind loadbalancer job, and then I haven't gotten back to it since. Might need to do something like finding all of the IPs on the node / host-network pod, and allowing any of them. (Need to actually check the interfaces, not look at node.Status.Addresses.)
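
(A rough sketch of the "allow any IP actually configured on the node" idea: run something like `ip -o -4 addr show scope global` inside the host-network pod, e.g. via the framework's exec helpers, and accept any of the addresses it reports as a valid source IP. The parsing below is a standalone illustration, not the framework's code.)

package main

import (
	"fmt"
	"strings"
)

// parseNodeIPs extracts every IPv4 address from `ip -o -4 addr show` output,
// instead of trusting node.Status.Addresses.
func parseNodeIPs(ipAddrOutput string) []string {
	var ips []string
	for _, line := range strings.Split(strings.TrimSpace(ipAddrOutput), "\n") {
		// Lines look like: "2: eth0    inet 172.18.0.3/16 brd 172.18.255.255 scope global eth0 ..."
		fields := strings.Fields(line)
		if len(fields) >= 4 && fields[2] == "inet" {
			ips = append(ips, strings.Split(fields[3], "/")[0])
		}
	}
	return ips
}

func main() {
	sample := "2: eth0    inet 172.18.0.3/16 brd 172.18.255.255 scope global eth0\n" +
		"3: docker0    inet 10.244.0.1/24 scope global docker0"
	fmt.Println(parseNodeIPs(sample)) // [172.18.0.3 10.244.0.1]
}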

@aojea (Member) commented May 15, 2024

"pod-to-other-node-IP will preserve source IP"

We have discussed this in the past and concluded that masquerading there was common, so it was accepted.

We are close

/test pull-kubernetes-e2e-kind-cloud-provider-loadbalancer

@aojea (Member) commented May 16, 2024

/test pull-kubernetes-e2e-kind-cloud-provider-loadbalancer

// Make sure dropPod is running. There are certain chances that the pod might be terminated due to unexpected reasons.
dropPod := e2epod.CreateExecPodOrFail(ctx, cs, namespace, "execpod-drop",
	func(pod *v1.Pod) {
		pod.Spec.NodeName = nodes.Items[1].Name
Member comment:

Avoid hardcoding the node name so as not to bypass the scheduler; this has had bad consequences in e2e, ending in flakiness. You can see that most of the tests use the following pattern:

		nodeSelection := e2epod.NodeSelection{Name: nodes.Items[0].Name}
		e2epod.SetNodeSelection(&serverPod1.Spec, nodeSelection)
		e2epod.NewPodClient(f).CreateSync(ctx, serverPod1)

// https://issues.k8s.io/123714
ingress := &svc.Status.LoadBalancer.Ingress[0]
if ingress.IP == "" || (ingress.IPMode != nil && *ingress.IPMode == v1.LoadBalancerIPModeProxy) {
	e2eskipper.Skipf("LoadBalancer uses 'Proxy' IPMode")
Member comment:

this is working fine; I can see that these tests are skipped now: https://testgrid.k8s.io/sig-network-kind#pr-sig-network-kind,%20loadbalancer

		framework.Failf("Source IP was NOT preserved")
	}
} else {
	gomega.Expect(err).To(gomega.HaveOccurred(), "should not have been able to reach via %s:%d", ips[0], nodePort)
Member comment:

this fails here now:

I0515 23:34:23.117414 71876 loadbalancer.go:1291] ClientIP:Port detected by target pod using NodePort is 172.18.0.3:32882, the IP of test container is 172.18.0.3
STEP: ensuring that the service is not reachable via 172.18.0.3:32118 - k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1288 @ 05/15/24 23:34:23.117
I0515 23:34:23.117509 71876 builder.go:121] Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://127.0.0.1:36985 --kubeconfig=/root/.kube/kind-test-config --namespace=esipp-7280 exec host-test-container-pod -- /bin/sh -x -c curl -g -q -s --connect-timeout 30 http://172.18.0.3:32118/clientip'
I0515 23:34:23.460353 71876 builder.go:146] stderr: "+ curl -g -q -s --connect-timeout 30 http://172.18.0.3:32118/clientip\n"
I0515 23:34:23.460388 71876 builder.go:147] stdout: "172.18.0.3:50884"
I0515 23:34:23.460400 71876 loadbalancer.go:1291] ClientIP:Port detected by target pod using NodePort is 172.18.0.3:50884, the IP of test container is 172.18.0.3
[FAILED] should not have been able to reach via 172.18.0.3:32118

@aojea (Member) commented May 16, 2024

The UDP one fails because envoy crashes; seems related to envoyproxy/envoy#14866:

[2024-05-16 07:11:43.483][101][critical][backtrace] [./source/server/backtrace.h:127] Caught Segmentation fault, suspect faulting address 0x18
[2024-05-16 07:11:43.483][101][critical][backtrace] [./source/server/backtrace.h:111] Backtrace (use tools/stack_decode.py to get line numbers):
[2024-05-16 07:11:43.483][101][critical][backtrace] [./source/server/backtrace.h:112] Envoy version: 816188b86a0a52095b116b107f576324082c7c02/1.30.1/Clean/RELEASE/BoringSSL
[2024-05-16 07:11:43.483][101][critical][backtrace] [./source/server/backtrace.h:114] Address mapping: 5585541ca000-558556b72000 /usr/local/bin/envoy
[2024-05-16 07:11:43.483][101][critical][backtrace] [./source/server/backtrace.h:121] #0: [0x7f9a5f4e1520]
[2024-05-16 07:11:43.483][101][critical][backtrace] [./source/server/backtrace.h:121] #1: [0x5585548d4caa]
[2024-05-16 07:11:43.483][101][critical][backtrace] [./source/server/backtrace.h:121] #2: [0x5585548d2bbf]
[2024-05-16 07:11:43.483][101][critical][backtrace] [./source/server/backtrace.h:121] #3: [0x5585561ed36d]
[2024-05-16 07:11:43.484][101][critical][backtrace] [./source/server/backtrace.h:121] #4: [0x5585561f1600]
[2024-05-16 07:11:43.484][101][critical][backtrace] [./source/server/backtrace.h:121] #5: [0x55855651bcc1]
[2024-05-16 07:11:43.484][101][critical][backtrace] [./source/server/backtrace.h:121] #6: [0x55855651998f]
[2024-05-16 07:11:43.484][101][critical][backtrace] [./source/server/backtrace.h:121] #7: [0x55855651ab2f]
[2024-05-16 07:11:43.484][101][critical][backtrace] [./source/server/backtrace.h:121] #8: [0x5585561f0fa6]
[2024-05-16 07:11:43.484][101][critical][backtrace] [./source/server/backtrace.h:121] #9: [0x5585561f0d4e]
[2024-05-16 07:11:43.484][101][critical][backtrace] [./source/server/backtrace.h:121] #10: [0x5585562ea081]
[2024-05-16 07:11:43.484][101][critical][backtrace] [./source/server/backtrace.h:121] #11: [0x5585562eb62d]
[2024-05-16 07:11:43.484][101][critical][backtrace] [./source/server/backtrace.h:121] #12: [0x55855653bd40]
[2024-05-16 07:11:43.484][101][critical][backtrace] [./source/server/backtrace.h:121] #13: [0x55855653a681]
[2024-05-16 07:11:43.484][101][critical][backtrace] [./source/server/backtrace.h:121] #14: [0x558555b79f9f]
[2024-05-16 07:11:43.484][101][critical][backtrace] [./source/server/backtrace.h:121] #15: [0x5585565b7d03]
[2024-05-16 07:11:43.484][101][critical][backtrace] [./source/server/backtrace.h:121] #16: [0x7f9a5f533ac3]

@aojea (Member) commented May 16, 2024

Probably fixed 3 weeks ago by envoyproxy/envoy#33824.

@aojea (Member) commented May 16, 2024

opened envoyproxy/envoy#34195

@aojea (Member) commented May 16, 2024

Workaround for UDP merged: kubernetes-sigs/cloud-provider-kind#68

/test pull-kubernetes-e2e-kind-cloud-provider-loadbalancer

@aojea (Member) commented May 16, 2024

/test pull-kubernetes-e2e-kind-cloud-provider-loadbalancer

👀 https://testgrid.k8s.io/sig-network-kind#pr-sig-network-kind,%20loadbalancer

@k8s-ci-robot (Contributor) commented:

@danwinship: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name | Commit | Details | Required | Rerun command
pull-kubernetes-e2e-gce-providerless | 8f7ac69 | link | false | /test pull-kubernetes-e2e-gce-providerless
pull-kubernetes-e2e-kind-nftables | 0640383 | link | false | /test pull-kubernetes-e2e-kind-nftables
pull-kubernetes-e2e-kind | 0640383 | link | true | /test pull-kubernetes-e2e-kind
pull-kubernetes-e2e-kind-ipv6 | 0640383 | link | true | /test pull-kubernetes-e2e-kind-ipv6
pull-kubernetes-verify | 0640383 | link | true | /test pull-kubernetes-verify
pull-kubernetes-e2e-gce-network-policies | 0640383 | link | false | /test pull-kubernetes-e2e-gce-network-policies
pull-kubernetes-e2e-gce | 0640383 | link | true | /test pull-kubernetes-e2e-gce
pull-kubernetes-e2e-gci-gce-ipvs | 0640383 | link | false | /test pull-kubernetes-e2e-gci-gce-ipvs
pull-kubernetes-e2e-kind-cloud-provider-loadbalancer | 0640383 | link | false | /test pull-kubernetes-e2e-kind-cloud-provider-loadbalancer

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@aojea (Member) commented May 16, 2024

ugh, the test panics now?

STEP: changing the UDP service's port - k8s.io/kubernetes/test/e2e/network/loadbalancer.go:364 @ 05/16/24 10:14:40.621
[PANICKED] Test Panicked
In [It] at: runtime/panic.go:114 @ 05/16/24 10:14:40.632

runtime error: index out of range [0] with length 0

Full Stack Trace
  k8s.io/kubernetes/test/e2e/network.init.func20.4({0x7fd61970b800, 0xc001b107b0})
  	k8s.io/kubernetes/test/e2e/network/loadbalancer.go:377 +0xf9d
< Exit [It] should be able to change the type and ports of a UDP service [Slow] - k8s.io/kubernetes/test/e2e/network/loadbalancer.go:275 @ 05/16/24 10:14:40.632 (1m15.197s)
> Enter [AfterEach] [sig-network] LoadBalancers [Feature:LoadBalancer] - k8s.io/kubernetes/test/e2e/network/loadbalancer.go:127 @ 05/16/24 10:14:40.632
I0516 10:14:40.632990 71875 util.go:81] 
Output of kubectl describe svc:

I0516 10:14:40.633060 71875 builder.go:121] Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://127.0.0.1:45793 --kubeconfig=/root/.kube/kind-test-config --namespace=loadbalancers-2905 describe svc --namespace=loadbalancers-2905'
I0516 10:14:40.746762 71875 builder.go:146] stderr: ""
I0516 10:14:40.746835 71875 builder.go:147] stdout: "Name:   
