
Adding a second dedicated network interface for longhorn replication #135

Open · sushyad opened this issue Jan 26, 2024 · 4 comments


sushyad commented Jan 26, 2024

I am trying to add a second network interface dedicated to Longhorn replication, using the Multus CNI plugin together with ipvlan. Here is the PR from my fork to give you an idea of what I am trying to do: #134

I was able to tweak the ArgoCD recipe to:

  • Bring up the second interface and assign it an IP address. I am able to ping the nodes on their secondary IP addresses from each other.
  • Add the ipvlan plugin to the /opt/cni/bin folder on each node.
  • Using this as a guide, add the Multus configuration through ArgoCD (a sketch of the kind of attachment definition I mean is shown right after this list).
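
For reference, the multus-conf attachment that the test pod below points at would be a NetworkAttachmentDefinition roughly like the following. This is only a minimal sketch: the master interface name (eth1), the subnet, and the host-local IPAM settings are placeholders for whatever the secondary NIC actually uses.

cat <<EOF | kubectl apply -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: multus-conf
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "ipvlan",
    "master": "eth1",
    "mode": "l2",
    "ipam": {
      "type": "host-local",
      "subnet": "192.168.1.0/24",
      "rangeStart": "192.168.1.200",
      "rangeEnd": "192.168.1.250"
    }
  }'
EOF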

When I create a test pod with two network interfaces, it doesn't work: the second interface doesn't show up as expected.

cat <<EOF | kubectl apply -f - 
apiVersion: v1
kind: Pod
metadata:
  name: app1
  annotations:
    k8s.v1.cni.cncf.io/networks: multus-conf
spec:
  containers:
  - name: app1
    command: ["/bin/sh", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: alpine
EOF
kubectl describe pod app1

gives

bash-5.2# kubectl describe pod app1 
Name:             app1
Namespace:        default
Priority:         0
Service Account:  default
Node:             metal0/192.168.0.115
Start Time:       Fri, 26 Jan 2024 19:49:49 +0000
Labels:           <none>
Annotations:      k8s.v1.cni.cncf.io/networks: multus-conf
Status:           Running
IP:               10.0.0.176
IPs:
  IP:  10.0.0.176
Containers:
......

instead of something like this:

$ kubectl describe pod app1
Name:             app1
Namespace:        default
Priority:         0
Service Account:  default
Node:             node2/192.168.200.175
Start Time:       Fri, 11 Aug 2023 12:28:56 +0300
Labels:           <none>
Annotations:      k8s.v1.cni.cncf.io/network-status:
                    [{
                        "name": "mynet",
                        "interface": "eth0",
                        "ips": [
                            "10.244.2.8"
                        ],
                        "mac": "86:69:28:4f:54:b3",
                        "default": true,
                        "dns": {},
                        "gateway": [
                            "10.244.2.1"
                        ]
                    },{
                        "name": "default/multus-conf",
                        "interface": "net1",
                        "ips": [
                            "192.168.200.100"
                        ],
                        "mac": "2a:1b:4d:89:66:c0",
                        "dns": {}
                    }]
                  k8s.v1.cni.cncf.io/networks: multus-conf
Status:           Running
IP:               10.244.2.8
IPs:
  IP:  10.244.2.8
Containers:
.....
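
In case it helps with reproducing, the interfaces can also be listed from inside the pod (using the app1 pod above; when the attachment works, a net1 interface should appear next to eth0):

kubectl exec app1 -- ip addr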

Has anyone tried to do this before?

khuedoan (Owner) commented

I don't have multiple NICs to reproduce this, but it is probably related to cilium/cilium#23483.
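
If that is indeed the cause, one thing that might be worth trying (only a sketch, assuming Cilium is installed through its Helm chart as in this repo) is telling Cilium not to take exclusive ownership of the CNI configuration directory, so it stops disabling other CNI configs such as the Multus one:

# Helm values for the Cilium chart (untested on my side):
cni:
  exclusive: false  # leave non-Cilium CNI configs in /etc/cni/net.d alone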

khuedoan (Owner) commented

If this feature is important to you, I think you can remove Cilium and use the default k3s CNI (Flannel), which seems to work with Multus.

You can reference the commits before 9f0d389 (install Cilium) and 65af4ff (remove MetalLB).

The disadvantage is that you may miss out on some future features that rely on eBPF.
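
For anyone going that route, the change essentially comes down to no longer disabling the bundled Flannel CNI when k3s is installed. A minimal sketch, assuming the k3s server options end up in /etc/rancher/k3s/config.yaml (where exactly this repo renders them may differ):

# /etc/rancher/k3s/config.yaml
# Drop (or never set) the options that disable the built-in CNI, i.e. the ones
# typically added when installing a custom CNI such as Cilium:
#
#   flannel-backend: none
#   disable-network-policy: true
#
# With these removed, k3s runs Flannel again and Multus can be layered on top of it.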


pandabear41 commented Jan 29, 2024

I have reproduced this as well. Cilium's features look better on paper, but in practice they fell short for me compared to Flannel or Calico.
I reverted to the default k3s CNI with PureLB (for now), with plans to test out Calico and its eBPF feature soon.

These are the three major issues I faced:

khuedoan (Owner) commented Feb 8, 2024

IPv6 has a separate tracking issue: #114.

For this issue, I'm not sure if there's anything I can do on my end since I don't have or use multiple NICs. As far as I understand, there are two options:

I'll leave this issue open for now in case someone has the same use case, but there's no action to take on it in this project.
