Add tetra policyfilter listpolicies command #3122
cmd/tetra/debug/dump.go
    fmt.Printf("%d: %s\n", polId, strings.Join(ids, ","))
}

fmt.Println("--- Reverse Map ---")
Even if we do not rename the map, I find that "Direct" and "Reverse" would make things harder to understand. Let's describe what the key and the value are here, to make the output easier to understand.
I have renamed those to more descriptive headers.
mtardy
left a comment
Looks good to me! I'll let Kornilios review.
kkourt
left a comment
Thanks!
Please find some comments below.
bpf/process/policy_filter.h
#define POLICY_FILTER_MAX_POLICIES 128
#define POLICY_FILTER_MAX_NAMESPACES 1024
#define POLICY_FILTER_MAX_CGROUP_IDS 32768 /* same as polMapSize in policyfilter/state.go */
This seems high to me. According to https://kubernetes.io/docs/setup/best-practices/cluster-large/, a good limit for the number of pods per node is ~100. How about something like 512 or 1024 entries?
Yes, this makes sense. I had just tried to keep it consistent with what we had in policyfilter/state.go.
Changed it to 1024.
This patch introduces an eBPF map that maps cgroupIds to policyIds. This is handled from user space in a similar way to policy_filter_maps. This can be used in later PRs to quickly identify policies that match a specific container, or to optimize tracing policies. Signed-off-by: Anastasios Papagiannis <anastasios.papagiannis@isovalent.com>
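Conceptually, the map described in this commit message can be sketched in user space as a plain map from cgroup ID to a set of policy IDs. The sketch below is a hypothetical simplification for illustration only (the type and method names are made up); the real state lives in a BPF map and is maintained alongside policy_filter_maps.

```go
package main

import "fmt"

// cgroupPolicies is a hypothetical user-space mirror of the BPF map:
// it maps a cgroup ID to the set of policy IDs applying to it.
type cgroupPolicies map[uint64]map[uint32]struct{}

// addPolicy records that a policy applies to a cgroup.
func (c cgroupPolicies) addPolicy(cgroupID uint64, policyID uint32) {
	if c[cgroupID] == nil {
		c[cgroupID] = make(map[uint32]struct{})
	}
	c[cgroupID][policyID] = struct{}{}
}

// policiesFor returns the policy IDs that match a given cgroup ID.
func (c cgroupPolicies) policiesFor(cgroupID uint64) []uint32 {
	var ids []uint32
	for id := range c[cgroupID] {
		ids = append(ids, id)
	}
	return ids
}

func main() {
	m := make(cgroupPolicies)
	// Hypothetical IDs, mirroring the example later in this PR where
	// cgroup 5695 matches policies 1 and 5.
	m.addPolicy(5695, 1)
	m.addPolicy(5695, 5)
	fmt.Println(len(m.policiesFor(5695))) // two matching policies
}
```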
Signed-off-by: Anastasios Papagiannis <anastasios.papagiannis@isovalent.com>
It is useful to have a debug command to identify which Kubernetes
Identity Aware policies should be applied to a specific container. An
example can be found here:
Create a pod with "app: ubuntu" and "usage: dev" labels.
$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: ubuntu
labels:
app: ubuntu
usage: dev
spec:
containers:
- name: ubuntu
image: ubuntu:24.10
command: ["/bin/sleep", "3650d"]
imagePullPolicy: IfNotPresent
restartPolicy: Always
EOF
Then apply several policies, some of which match while others don't.
$ cat << EOF | kubectl apply -f -
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
name: "lseek-podfilter-app"
spec:
podSelector:
matchLabels:
app: "ubuntu"
kprobes:
[...]
EOF
$ cat << EOF | kubectl apply -f -
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
name: "lseek-podfilter-usage"
spec:
podSelector:
matchLabels:
usage: "dev"
kprobes:
[...]
EOF
$ cat << EOF | kubectl apply -f -
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
name: "lseek-podfilter-prod"
spec:
podSelector:
matchLabels:
prod: "true"
kprobes:
[...]
EOF
$ cat << EOF | kubectl apply -f -
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
name: "lseek-podfilter-info"
spec:
podSelector:
matchLabels:
info: "broken"
kprobes:
[...]
EOF
$ cat << EOF | kubectl apply -f -
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
name: "lseek-podfilter-global"
spec:
kprobes:
[...]
EOF
Based on the labels, we expect policies lseek-podfilter-app and
lseek-podfilter-usage to match that pod. lseek-podfilter-global is
not a Kubernetes Identity Aware policy, so it is applied in all
cases and we do not report it.
The first step is to find the container ID that we care about.
$ kubectl describe pod/ubuntu | grep containerd
Container ID: containerd://ff433e9e16467787a60ac853d9b313150091968731f620776d6d7c514b1e8d6c
And then use it to report all Kubernetes Identity Aware policies that
match.
$ kubectl exec -it ds/tetragon -n kube-system -c tetragon -- tetra policyfilter -r "unix:///procRoot/1/root/run/containerd/containerd.sock" listpolicies ff433e9e16467787a60ac853d9b313150091968731f620776d6d7c514b1e8d6c
ID NAME STATE FILTERID NAMESPACE SENSORS KERNELMEMORY
5 lseek-podfilter-usage enabled 5 (global) generic_kprobe 1.72 MB
1 lseek-podfilter-app enabled 1 (global) generic_kprobe 1.72 MB
We also provide a --debug flag for more details, e.g.:
$ kubectl exec -it ds/tetragon -n kube-system -c tetragon -- tetra policyfilter -r "unix:///procRoot/1/root/run/containerd/containerd.sock" listpolicies ff433e9e16467787a60ac853d9b313150091968731f620776d6d7c514b1e8d6c --debug
time="2024-12-13T09:47:38Z" level=info msg=cgroup path=/run/tetragon/cgroup2/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod189a8053_9f36_4250_bcae_9ed167172920.slice/cri-containerd-ff433e9e16467787a60ac853d9b313150091968731f620776d6d7c514b1e8d6c.scope
time="2024-12-13T09:47:38Z" level=info msg=cgroup id=5695
time="2024-12-13T09:47:39Z" level=debug msg="resolved server address using info file" InitInfoFile=/var/run/tetragon/tetragon-info.json ServerAddress="localhost:54321"
ID NAME STATE FILTERID NAMESPACE SENSORS KERNELMEMORY
1 lseek-podfilter-app enabled 1 (global) generic_kprobe 1.72 MB
5 lseek-podfilter-usage enabled 5 (global) generic_kprobe 1.72 MB
This uses the cgroup-based policy filter map introduced in a previous
commit, which maps cgroupIds to policyIds.
Signed-off-by: Anastasios Papagiannis <anastasios.papagiannis@isovalent.com>
By adding a command line argument (and the appropriate configmap option). Signed-off-by: Anastasios Papagiannis <anastasios.papagiannis@isovalent.com>
Add tetra policyfilter listpolicies to determine which Kubernetes Identity Aware policies should be applied to a specific container.
Details on how this works can be found in the individual commits.