OpenELB assigned IP addresses sometimes inaccessible #378
tracstonlabs started this conversation in General (1 comment)
Hello,

I'm having a strange issue with OpenELB. I have 3 bare-metal worker servers and a VM control plane installed on VMware; they are not on the same network:
VM Control-plane: 192.168.0.80
Baremetal Worker-1: 192.168.100.2
Baremetal Worker-2: 192.168.100.3
Baremetal Worker-3: 192.168.100.4
I configured an EIP pool of 256 addresses; there is no firewall, and routes between the VLANs work properly.
When I deploy about 7 pods from the same Helm chart, each pod has a LoadBalancer Service that needs to be accessible from outside (for example, the MySQL service). Most of the time everything seems to work and the IP addresses are assigned, but only for 6 out of the 7 pods, and I get the following error:
"level":"error","ts":1704627808.5521305,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"LBController","request":"my-namespace/mysql","error":"failed to resolve ip 10.100.100.80 err=no usable interface found","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
{"level":"info","ts":1704627809.55248,"msg":"setup openelb service","service":"my-namespace/kafka"}
The following IP addresses were assigned properly:
192.168.100.124
192.168.100.121
192.168.100.123
192.168.100.125
192.168.100.120
192.168.100.122
But it seems like sometimes one or more of the IPs is not reachable, and it's not consistent.

If I deploy an additional Helm chart with a LoadBalancer Service, it gets another IP, usually 192.168.100.140, not contiguous with the first chart's addresses, regardless of whether it is deployed immediately after the first chart. That external IP address is not accessible at all.

Any idea how to solve this issue so that all the IP addresses are accessible?

Attached configuration:
eip.yaml
apiVersion: network.kubesphere.io/v1alpha2
kind: Eip
metadata:
  annotations: {}
  name: namespaces-elastic-pool
spec:
  address: 192.168.100.100-192.168.100.254
  interface: bond0
  protocol: layer2
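One thing worth double-checking here (a hedged note, not from the original post): interface: bond0 can only resolve on nodes that actually have a bond0 device, and the VMware control-plane VM most likely exposes a different NIC name, which would line up with the "failed to resolve ip ... no usable interface found" error whenever that node is involved. If I'm reading the OpenELB Eip docs right, spec.interface also accepts a can_reach:<IP> form that picks whichever local NIC can reach the given address, so nodes with differing NIC names can all resolve. A minimal sketch, assuming 192.168.100.1 is a reachable gateway on the EIP VLAN (substitute your own):

eip-can-reach.yaml (sketch)
apiVersion: network.kubesphere.io/v1alpha2
kind: Eip
metadata:
  name: namespaces-elastic-pool
spec:
  address: 192.168.100.100-192.168.100.254
  # can_reach makes OpenELB pick the NIC that can reach this address,
  # instead of requiring every node to have an interface named bond0.
  # 192.168.100.1 is an assumed gateway on the EIP VLAN; substitute your own.
  interface: can_reach:192.168.100.1
  protocol: layer2

Note that a node which merely routes to 192.168.100.0/24 (rather than sitting on it) would still pick a NIC on the wrong VLAN, so restricting which nodes run the speaker may also be needed; see the sketch after service.yaml below.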
service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    lb.kubesphere.io/v1alpha1: openelb
    protocol.openelb.kubesphere.io/v1alpha1: layer2
    eip.openelb.kubesphere.io/v1alpha2: namespaces-elastic-pool
  name: dev-mysql
spec:
  selector:
    app: dev-mysql
  ports:
    - port: 3306
      targetPort: 3306
  type: LoadBalancer
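A related hedged sketch: in layer2 mode, external traffic for an EIP lands on whichever node answers ARP for it, so keeping the speaker off nodes that have no interface on the 192.168.100.0/24 VLAN (like the control-plane VM on 192.168.0.80) is a common way to make the announced addresses consistently reachable. Assuming the speaker runs as the default openelb-speaker DaemonSet in the openelb-system namespace, and using a node label you apply yourself (the label name below is made up for illustration), a strategic-merge patch could look like:

speaker-nodeselector-patch.yaml (sketch)
spec:
  template:
    spec:
      # Only schedule the layer2 speaker on nodes that sit on the EIP VLAN.
      # The openelb-speaker=true label is hypothetical; you would apply it
      # yourself, e.g.: kubectl label node worker-1 openelb-speaker=true
      nodeSelector:
        openelb-speaker: "true"

Applied with: kubectl -n openelb-system patch daemonset openelb-speaker --patch-file speaker-nodeselector-patch.yaml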
Reply:
https://github.com/openelb/openelb/blob/master/pkg/speaker/layer2/arp.go#L154