@MichalGuzieniuk all containers in the pod share the same network namespace, so I think adding the device resource to the first container alone would be sufficient.
Yes, this is true for kernel interfaces: every container inside the pod's network namespace can see the attached SR-IOV device, so it matters less which container the resource is injected into.
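For the kernel-interface case, a minimal sketch of such a pod (the network name `sriov-net` and resource name `intel.com/sriov_netdevice` are placeholders, not taken from this thread):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod
  annotations:
    # secondary network attachment; placeholder network name
    k8s.v1.cni.cncf.io/networks: sriov-net
spec:
  containers:
  - name: first
    image: busybox
    command: ["sleep", "infinity"]
    resources:
      requests:
        intel.com/sriov_netdevice: "1"  # device resource requested only here
      limits:
        intel.com/sriov_netdevice: "1"
  - name: second
    image: busybox
    command: ["sleep", "infinity"]
    # no device resource here, yet the kernel interface (e.g. net1)
    # is still visible in this container via the shared network namespace
```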
@MichalGuzieniuk we do have a problem when the resource is a userspace device (e.g. DPDK): the userspace device mounted into the first container is invisible to the other containers in the same pod. There is no solution that addresses this yet.
NRI mounts the k8s.v1.cni.cncf.io/network-status annotation (which includes device-info) into all containers. Wouldn't that suffice for a userspace device, e.g. to find out the PCI address of the VF?
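For reference, a rough sketch of what a device-info entry in that annotation looks like (values are illustrative; the exact shape is defined by the Network Plumbing WG device-info specification):

```yaml
metadata:
  annotations:
    k8s.v1.cni.cncf.io/network-status: |
      [{
        "name": "default/sriov-net",
        "interface": "net1",
        "device-info": {
          "type": "pci",
          "version": "1.0.0",
          "pci": {
            "pci-address": "0000:af:06.0"
          }
        }
      }]
```

Any container in the pod could parse this from the mounted annotation file and extract `pci-address`, e.g. to bind the VF to a DPDK driver.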
NRI adds limits and requests to the resources section of the first container in the pod only; the second container's resources are not modified.
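Concretely, a minimal sketch of the observed result (container and resource names are hypothetical), where only the first container carries the injected resource after mutation:

```yaml
spec:
  containers:
  - name: first
    resources:                          # injected by NRI
      requests:
        intel.com/sriov_netdevice: "1"
      limits:
        intel.com/sriov_netdevice: "1"
  - name: second
    resources: {}                       # left untouched by NRI
```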
Test case:
First container
Second container
Current result:
NRI logs