Add option to print metadata information #148

Open
wants to merge 3 commits into base: main

Conversation

adrianreber (Member)

This adds an option to print out container metadata. In the case of CRI-O this can look like:

├── Metadata
│   ├── Name: counters
│   ├── Namespace: default
│   └── Annotations
│       ├── io.kubernetes.cri-o.ImageName: quay.io/adrianreber/counter:latest

Currently this contains all annotations from the container.
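
Not the actual diff, just a sketch for readers of how such a branch can be assembled, assuming the tree view in internal/tree.go is built with github.com/xlab/treeprint; the helper name, the sorting, and the flat key/value rendering are illustrative assumptions rather than what this PR necessarily does:

```go
package main

import (
	"fmt"
	"sort"

	"github.com/xlab/treeprint"
)

// addMetadataBranch attaches a "Metadata" branch with the container name,
// namespace, and one node per annotation (sorted for stable output).
func addMetadataBranch(tree treeprint.Tree, name, namespace string, annotations map[string]string) {
	meta := tree.AddBranch("Metadata")
	meta.AddNode(fmt.Sprintf("Name: %s", name))
	meta.AddNode(fmt.Sprintf("Namespace: %s", namespace))

	if len(annotations) == 0 {
		return
	}
	ann := meta.AddBranch("Annotations")
	keys := make([]string, 0, len(annotations))
	for k := range annotations {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		ann.AddNode(fmt.Sprintf("%s: %s", k, annotations[k]))
	}
}

func main() {
	tree := treeprint.New()
	addMetadataBranch(tree, "counters", "default", map[string]string{
		"io.kubernetes.cri-o.ImageName": "quay.io/adrianreber/counter:latest",
	})
	fmt.Print(tree.String())
}
```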

codecov-commenter commented Oct 17, 2024

Codecov Report

Attention: Patch coverage is 42.30769% with 30 lines in your changes missing coverage. Please review.

Project coverage is 77.76%. Comparing base (d2276c1) to head (692adc3).

| Files with missing lines | Patch % | Lines |
|--------------------------|---------|-------|
| internal/tree.go | 33.33% | 28 Missing and 2 partials ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #148      +/-   ##
==========================================
- Coverage   81.51%   77.76%   -3.75%     
==========================================
  Files          11       11              
  Lines        1082     1417     +335     
==========================================
+ Hits          882     1102     +220     
- Misses        128      241     +113     
- Partials       72       74       +2     
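
Quick sanity check on the numbers above (my arithmetic, not part of the Codecov report): the patch coverage of 42.30769% works out to 22 of 52 tracked new lines being hit, consistent with the 30 lines reported as missing coverage, and the project figures are simply hits over lines: 882/1082 ≈ 81.5% on main versus 1102/1417 ≈ 77.8% here, which is where the -3.75% delta comes from.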

☔ View full report in Codecov by Sentry.

github-actions bot commented Oct 17, 2024

Test Results

60 tests  ±0   60 ✅ ±0   1s ⏱️ ±0s
 1 suites ±0    0 💤 ±0 
 1 files   ±0    0 ❌ ±0 

Results for commit 692adc3. ± Comparison against base commit d2276c1.

♻️ This comment has been updated with latest results.

internal/tree.go: review comment (outdated, resolved)
rst0git (Member) commented Oct 17, 2024

@adrianreber Would it make sense to update the examples in the README file?

The metadata seems very useful:

Displaying container checkpoint tree view from /var/lib/kubelet/checkpoints/checkpoint-cuda-counter_default-cuda-counter-2024-08-08T16:44:46+01:00.tar

cuda-counter
├── Image: quay.io/radostin/cuda-counter:latest
├── ID: fb34c2731139904b498bc00c594ad467f356bb2d3d0ef54ceb5f42e3255ef716
├── Runtime: nvidia
├── Created: 2024-08-08T16:44:01.20924199+01:00
├── Checkpointed: 2024-08-08T16:44:46+01:00
├── Engine: CRI-O
├── IP: 10.85.0.41
├── Checkpoint size: 181.9 MiB
│   └── Memory pages size: 181.9 MiB
├── Root FS diff size: 43.0 KiB
├── CRIU dump statistics
│   ├── Freezing time: 100.66 ms
│   ├── Frozen time: 191.65 ms
│   ├── Memdump time: 72.429 ms
│   ├── Memwrite time: 65.183 ms
│   ├── Pages scanned: 44202
│   └── Pages written: 44007
├── Metadata
│   ├── Name: cuda-counter
│   ├── Namespace: default
│   └── Annotations
│       ├── io.kubernetes.cri-o.PlatformRuntimePath: 
│       ├── io.kubernetes.pod.name: cuda-counter
│       ├── kubectl.kubernetes.io/last-applied-configuration
│       │   ├── apiVersion: v1
│       │   ├── kind: Pod
│       │   ├── metadata: map[annotations:map[] name:cuda-counter namespace:default]
│       │   └── spec: map[containers:[map[image:quay.io/radostin/cuda-counter:latest name:cuda-counter resources:map[limits:map[nvidia.com/gpu:%!s(float64=1)]]]] restartPolicy:OnFailure]
│       ├── kubernetes.io/config.source: api
│       ├── io.kubernetes.container.hash: ee5dc06f
│       ├── io.kubernetes.cri-o.Annotations
│       │   ├── nvidia.cdi.k8s.io/nvidia-device-plugin_b2f28340-72b0-4b65-99df-4ff16e97afba: k8s.device-plugin.nvidia.com/gpu=GPU-0e6d5bd3-8a62-b1fc-afa4-ff9bcd7e8d69
│       │   ├── io.kubernetes.container.hash: ee5dc06f
│       │   ├── io.kubernetes.container.restartCount: 0
│       │   ├── io.kubernetes.container.terminationMessagePath: /dev/termination-log
│       │   ├── io.kubernetes.container.terminationMessagePolicy: File
│       │   └── io.kubernetes.pod.terminationGracePeriod: 30
│       ├── io.kubernetes.cri-o.Labels
│       │   ├── io.kubernetes.pod.name: cuda-counter
│       │   ├── io.kubernetes.pod.namespace: default
│       │   ├── io.kubernetes.pod.uid: e271eb45-8f82-4657-803c-d3268fbb1c02
│       │   └── io.kubernetes.container.name: cuda-counter
│       ├── io.kubernetes.cri-o.LogPath: /var/log/pods/default_cuda-counter_e271eb45-8f82-4657-803c-d3268fbb1c02/cuda-counter/0.log
│       ├── io.kubernetes.cri-o.SeccompProfilePath: 
│       ├── io.kubernetes.pod.namespace: default
│       ├── io.kubernetes.container.restartCount: 0
│       ├── io.kubernetes.cri-o.IP.0: 10.85.0.41
│       ├── io.kubernetes.cri-o.Name: k8s_cuda-counter_cuda-counter_default_e271eb45-8f82-4657-803c-d3268fbb1c02_0
│       ├── io.kubernetes.cri-o.SandboxName: k8s_cuda-counter_default_e271eb45-8f82-4657-803c-d3268fbb1c02_0
│       ├── io.kubernetes.container.terminationMessagePolicy: File
│       ├── io.kubernetes.cri-o.TTY: false
│       ├── io.kubernetes.cri-o.Volumes
│       │   ├── /etc/hosts
│       │   │   ├── host path: /var/lib/kubelet/pods/e271eb45-8f82-4657-803c-d3268fbb1c02/etc-hosts
│       │   │   ├── read-only: false
│       │   │   ├── selinux relabel: false
│       │   │   ├── recursive read-only: false
│       │   │   └── propagation: 0
│       │   ├── /dev/termination-log
│       │   │   ├── host path: /var/lib/kubelet/pods/e271eb45-8f82-4657-803c-d3268fbb1c02/containers/cuda-counter/d151983a
│       │   │   ├── read-only: false
│       │   │   ├── selinux relabel: false
│       │   │   ├── recursive read-only: false
│       │   │   └── propagation: 0
│       │   └── /var/run/secrets/kubernetes.io/serviceaccount
│       │       ├── host path: /var/lib/kubelet/pods/e271eb45-8f82-4657-803c-d3268fbb1c02/volumes/kubernetes.io~projected/kube-api-access-b7fhk
│       │       ├── read-only: true
│       │       ├── selinux relabel: false
│       │       ├── recursive read-only: false
│       │       └── propagation: 0
│       ├── io.kubernetes.pod.terminationGracePeriod: 30
│       ├── io.kubernetes.cri-o.Image: quay.io/radostin/cuda-counter@sha256:087eb83a88424a27bcc034ed83968f5f3e82c652737e945a98918b886cc296f9
│       ├── org.systemd.property.After: ['crio.service']
│       ├── org.systemd.property.TimeoutStopUSec: uint64 30000000
│       ├── kubernetes.io/config.seen: 2024-08-08T16:43:59.891435145+01:00
│       ├── io.kubernetes.cri-o.IP.1: 1100:200::29
│       ├── io.kubernetes.cri-o.ImageName: quay.io/radostin/cuda-counter:latest
│       ├── io.kubernetes.cri-o.Metadata
│       │   └── name: cuda-counter
│       ├── io.kubernetes.cri-o.ResolvPath: /run/containers/storage/overlay-containers/df3cff04a90a6342ba76b6be093d1ae7455a788a5cda2bdb357fdb787c3e17ab/userdata/resolv.conf
│       ├── io.kubernetes.container.name: cuda-counter
│       ├── io.kubernetes.container.terminationMessagePath: /dev/termination-log
│       ├── io.kubernetes.cri-o.ContainerID: fb34c2731139904b498bc00c594ad467f356bb2d3d0ef54ceb5f42e3255ef716
│       ├── io.kubernetes.cri-o.ContainerType: container
│       ├── io.kubernetes.cri-o.SandboxID: df3cff04a90a6342ba76b6be093d1ae7455a788a5cda2bdb357fdb787c3e17ab
│       ├── io.kubernetes.cri-o.StdinOnce: false
│       ├── io.kubernetes.pod.uid: e271eb45-8f82-4657-803c-d3268fbb1c02
│       ├── nvidia.cdi.k8s.io/nvidia-device-plugin_b2f28340-72b0-4b65-99df-4ff16e97afba: k8s.device-plugin.nvidia.com/gpu=GPU-0e6d5bd3-8a62-b1fc-afa4-ff9bcd7e8d69
│       ├── io.kubernetes.cri-o.ImageRef: 5682bd8f332b6cb21a17bca938cb1107be40b528554b19f26c45bcf3737a3690
│       ├── io.kubernetes.cri-o.MountPoint: /var/lib/containers/storage/overlay/995bc5749a94f6ca166a45cfe9c8566db2944a43c66b2ea0ca0e2c82c19409b1/merged
│       ├── org.systemd.property.DefaultDependencies: true
│       ├── io.container.manager: cri-o
│       ├── io.kubernetes.cri-o.Created: 2024-08-08T16:44:01.20924199+01:00
│       ├── io.kubernetes.cri-o.Stdin: false
│       └── org.systemd.property.CollectMode: 'inactive-or-failed'
├── Process tree
│   └── [1]  /bin/sh -c /benchmark/main 
│       ├── Environment variables
│       │   ├── NV_LIBCUBLAS_VERSION=12.5.3.2-1
│       │   ├── KUBERNETES_SERVICE_PORT_HTTPS=443
│       │   ├── NVIDIA_VISIBLE_DEVICES=GPU-0e6d5bd3-8a62-b1fc-afa4-ff9bcd7e8d69
│       │   ├── NV_NVML_DEV_VERSION=12.5.82-1
│       │   ├── KUBERNETES_SERVICE_PORT=443
│       │   ├── HOSTNAME=cuda-counter
│       │   ├── NVIDIA_REQUIRE_CUDA=cuda>=12.5 brand=unknown,driver>=470,driver<471 brand=grid,driver>=470,driver<471 brand=tesla,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=vapps,driver>=470,driver<471 brand=vpc,driver>=470,driver<471 brand=vcs,driver>=470,driver<471 brand=vws,driver>=470,driver<471 brand=cloudgaming,driver>=470,driver<471 brand=unknown,driver>=535,driver<536 brand=grid,driver>=535,driver<536 brand=tesla,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=vapps,driver>=535,driver<536 brand=vpc,driver>=535,driver<536 brand=vcs,driver>=535,driver<536 brand=vws,driver>=535,driver<536 brand=cloudgaming,driver>=535,driver<536 brand=unknown,driver>=550,driver<551 brand=grid,driver>=550,driver<551 brand=tesla,driver>=550,driver<551 brand=nvidia,driver>=550,driver<551 brand=quadro,driver>=550,driver<551 brand=quadrortx,driver>=550,driver<551 brand=nvidiartx,driver>=550,driver<551 brand=vapps,driver>=550,driver<551 brand=vpc,driver>=550,driver<551 brand=vcs,driver>=550,driver<551 brand=vws,driver>=550,driver<551 brand=cloudgaming,driver>=550,driver<551
│       │   ├── NV_LIBCUBLAS_DEV_PACKAGE=libcublas-dev-12-5=12.5.3.2-1
│       │   ├── NV_NVTX_VERSION=12.5.82-1
│       │   ├── NV_CUDA_CUDART_DEV_VERSION=12.5.82-1
│       │   ├── NV_LIBCUSPARSE_VERSION=12.5.1.3-1
│       │   ├── NV_LIBNPP_VERSION=12.3.0.159-1
│       │   ├── PWD=/
│       │   ├── NVIDIA_DRIVER_CAPABILITIES=compute,utility
│       │   ├── NV_NVPROF_DEV_PACKAGE=cuda-nvprof-12-5=12.5.82-1
│       │   ├── NV_LIBNPP_PACKAGE=libnpp-12-5=12.3.0.159-1
│       │   ├── NV_LIBCUBLAS_DEV_VERSION=12.5.3.2-1
│       │   ├── NVIDIA_PRODUCT_NAME=CUDA
│       │   ├── NV_LIBCUBLAS_DEV_PACKAGE_NAME=libcublas-dev-12-5
│       │   ├── NV_CUDA_CUDART_VERSION=12.5.82-1
│       │   ├── HOME=/root
│       │   ├── KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
│       │   ├── CUDA_VERSION=12.5.1
│       │   ├── NV_LIBCUBLAS_PACKAGE=libcublas-12-5=12.5.3.2-1
│       │   ├── NV_CUDA_NSIGHT_COMPUTE_DEV_PACKAGE=cuda-nsight-compute-12-5=12.5.1-1
│       │   ├── NV_LIBNPP_DEV_PACKAGE=libnpp-dev-12-5=12.3.0.159-1
│       │   ├── NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-5
│       │   ├── NV_LIBNPP_DEV_VERSION=12.3.0.159-1
│       │   ├── TERM=xterm
│       │   ├── NV_LIBCUSPARSE_DEV_VERSION=12.5.1.3-1
│       │   ├── LIBRARY_PATH=/usr/local/cuda/lib64/stubs
│       │   ├── SHLVL=0
│       │   ├── NV_CUDA_LIB_VERSION=12.5.1-1
│       │   ├── NVARCH=x86_64
│       │   ├── KUBERNETES_PORT_443_TCP_PROTO=tcp
│       │   ├── KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
│       │   ├── LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
│       │   ├── NV_CUDA_NSIGHT_COMPUTE_VERSION=12.5.1-1
│       │   ├── KUBERNETES_SERVICE_HOST=10.96.0.1
│       │   ├── NV_NVPROF_VERSION=12.5.82-1
│       │   ├── KUBERNETES_PORT=tcp://10.96.0.1:443
│       │   ├── KUBERNETES_PORT_443_TCP_PORT=443
│       │   ├── CUDA_HOME=/usr/local/cuda
│       │   └── PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
│       ├── Open files
│       │   ├── [REG 0]  /dev/null
│       │   ├── [PIPE 1]  pipe[118147]
│       │   ├── [PIPE 2]  pipe[118148]
│       │   ├── [cwd]  /
│       │   └── [root]  /
│       └── [27]  /benchmark/main 
│           ├── Environment variables
│           │   ├── LIBRARY_PATH=/usr/local/cuda/lib64/stubs
│           │   ├── NV_LIBCUBLAS_VERSION=12.5.3.2-1
│           │   ├── KUBERNETES_SERVICE_PORT=443
│           │   ├── NV_NVPROF_DEV_PACKAGE=cuda-nvprof-12-5=12.5.82-1
│           │   ├── KUBERNETES_PORT=tcp://10.96.0.1:443
│           │   ├── NV_CUDA_NSIGHT_COMPUTE_VERSION=12.5.1-1
│           │   ├── HOSTNAME=cuda-counter
│           │   ├── SHLVL=0
│           │   ├── LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
│           │   ├── HOME=/root
│           │   ├── NV_LIBCUBLAS_DEV_VERSION=12.5.3.2-1
│           │   ├── NV_LIBNPP_PACKAGE=libnpp-12-5=12.3.0.159-1
│           │   ├── CUDA_VERSION=12.5.1
│           │   ├── NV_NVPROF_VERSION=12.5.82-1
│           │   ├── NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-5
│           │   ├── NVIDIA_REQUIRE_CUDA=cuda>=12.5 brand=unknown,driver>=470,driver<471 brand=grid,driver>=470,driver<471 brand=tesla,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=vapps,driver>=470,driver<471 brand=vpc,driver>=470,driver<471 brand=vcs,driver>=470,driver<471 brand=vws,driver>=470,driver<471 brand=cloudgaming,driver>=470,driver<471 brand=unknown,driver>=535,driver<536 brand=grid,driver>=535,driver<536 brand=tesla,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=vapps,driver>=535,driver<536 brand=vpc,driver>=535,driver<536 brand=vcs,driver>=535,driver<536 brand=vws,driver>=535,driver<536 brand=cloudgaming,driver>=535,driver<536 brand=unknown,driver>=550,driver<551 brand=grid,driver>=550,driver<551 brand=tesla,driver>=550,driver<551 brand=nvidia,driver>=550,driver<551 brand=quadro,driver>=550,driver<551 brand=quadrortx,driver>=550,driver<551 brand=nvidiartx,driver>=550,driver<551 brand=vapps,driver>=550,driver<551 brand=vpc,driver>=550,driver<551 brand=vcs,driver>=550,driver<551 brand=vws,driver>=550,driver<551 brand=cloudgaming,driver>=550,driver<551
│           │   ├── NV_LIBCUSPARSE_VERSION=12.5.1.3-1
│           │   ├── NVIDIA_DRIVER_CAPABILITIES=compute,utility
│           │   ├── NV_CUDA_LIB_VERSION=12.5.1-1
│           │   ├── NV_NVML_DEV_VERSION=12.5.82-1
│           │   ├── NV_LIBNPP_DEV_PACKAGE=libnpp-dev-12-5=12.3.0.159-1
│           │   ├── TERM=xterm
│           │   ├── NV_CUDA_CUDART_VERSION=12.5.82-1
│           │   ├── KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
│           │   ├── PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
│           │   ├── NV_LIBCUBLAS_DEV_PACKAGE_NAME=libcublas-dev-12-5
│           │   ├── NV_LIBCUBLAS_PACKAGE=libcublas-12-5=12.5.3.2-1
│           │   ├── NVARCH=x86_64
│           │   ├── KUBERNETES_PORT_443_TCP_PORT=443
│           │   ├── NV_LIBCUSPARSE_DEV_VERSION=12.5.1.3-1
│           │   ├── KUBERNETES_PORT_443_TCP_PROTO=tcp
│           │   ├── NVIDIA_PRODUCT_NAME=CUDA
│           │   ├── NV_CUDA_CUDART_DEV_VERSION=12.5.82-1
│           │   ├── NV_LIBCUBLAS_DEV_PACKAGE=libcublas-dev-12-5=12.5.3.2-1
│           │   ├── NV_CUDA_NSIGHT_COMPUTE_DEV_PACKAGE=cuda-nsight-compute-12-5=12.5.1-1
│           │   ├── KUBERNETES_SERVICE_PORT_HTTPS=443
│           │   ├── NV_NVTX_VERSION=12.5.82-1
│           │   ├── NV_LIBNPP_VERSION=12.3.0.159-1
│           │   ├── KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
│           │   ├── PWD=/
│           │   ├── KUBERNETES_SERVICE_HOST=10.96.0.1
│           │   ├── CUDA_HOME=/usr/local/cuda
│           │   ├── NVIDIA_VISIBLE_DEVICES=GPU-0e6d5bd3-8a62-b1fc-afa4-ff9bcd7e8d69
│           │   └── NV_LIBNPP_DEV_VERSION=12.3.0.159-1
│           ├── Open files
│           │   ├── [REG 0]  /dev/null
│           │   ├── [PIPE 1]  pipe[118147]
│           │   ├── [PIPE 2]  pipe[118148]
│           │   ├── [EVENTFD 3]  EVENTFD.21
│           │   ├── [PIPE 4]  pipe[121971]
│           │   ├── [PIPE 5]  pipe[121971]
│           │   ├── [PIPE 6]  pipe[121972]
│           │   ├── [PIPE 7]  pipe[121972]
│           │   ├── [EVENTFD 9]  EVENTFD.26
│           │   ├── [EVENTFD 10]  EVENTFD.26
│           │   ├── [UNIXSK 14]  unix[121981 (0) cuda-uvmfd-4026533705-27]
│           │   ├── [EVENTFD 15]  EVENTFD.28
│           │   ├── [EVENTFD 17]  EVENTFD.26
│           │   ├── [EVENTFD 19]  EVENTFD.26
│           │   ├── [EVENTFD 21]  EVENTFD.26
│           │   ├── [EVENTFD 23]  EVENTFD.26
│           │   ├── [EVENTFD 24]  EVENTFD.29
│           │   ├── [EVENTFD 26]  EVENTFD.26
│           │   ├── [EVENTFD 28]  EVENTFD.26
│           │   ├── [EVENTFD 30]  EVENTFD.26
│           │   ├── [EVENTFD 32]  EVENTFD.26
│           │   ├── [PIPE 33]  pipe[121984]
│           │   ├── [PIPE 34]  pipe[121984]
│           │   ├── [EVENTFD 35]  EVENTFD.26
│           │   ├── [cwd]  /
│           │   └── [root]  /
│           └── Open sockets
│               └── [UNIX (SEQPACKET)]  cuda-uvmfd-4026533705-27
└── Overview of mounts
    ├── Destination: /proc
    │   ├── Type: proc
    │   └── Source: proc
    ├── Destination: /dev
    │   ├── Type: tmpfs
    │   └── Source: tmpfs
    ├── Destination: /dev/pts
    │   ├── Type: devpts
    │   └── Source: devpts
    ├── Destination: /dev/mqueue
    │   ├── Type: mqueue
    │   └── Source: mqueue
    ├── Destination: /sys
    │   ├── Type: sysfs
    │   └── Source: sysfs
    ├── Destination: /sys/fs/cgroup
    │   ├── Type: cgroup
    │   └── Source: cgroup
    ├── Destination: /dev/shm
    │   ├── Type: bind
    │   └── Source: /run/containers/storage/overlay-containers/df3cff04a90a6342ba76b6be093d1ae7455a788a5cda2bdb357fdb787c3e17ab/userdata/shm
    ├── Destination: /etc/resolv.conf
    │   ├── Type: bind
    │   └── Source: /run/containers/storage/overlay-containers/df3cff04a90a6342ba76b6be093d1ae7455a788a5cda2bdb357fdb787c3e17ab/userdata/resolv.conf
    ├── Destination: /etc/hostname
    │   ├── Type: bind
    │   └── Source: /run/containers/storage/overlay-containers/df3cff04a90a6342ba76b6be093d1ae7455a788a5cda2bdb357fdb787c3e17ab/userdata/hostname
    ├── Destination: /run/.containerenv
    │   ├── Type: bind
    │   └── Source: /run/containers/storage/overlay-containers/df3cff04a90a6342ba76b6be093d1ae7455a788a5cda2bdb357fdb787c3e17ab/userdata/.containerenv
    ├── Destination: /etc/hosts
    │   ├── Type: bind
    │   └── Source: /var/lib/kubelet/pods/e271eb45-8f82-4657-803c-d3268fbb1c02/etc-hosts
    ├── Destination: /dev/termination-log
    │   ├── Type: bind
    │   └── Source: /var/lib/kubelet/pods/e271eb45-8f82-4657-803c-d3268fbb1c02/containers/cuda-counter/d151983a
    ├── Destination: /run/secrets
    │   ├── Type: bind
    │   └── Source: /run/containers/storage/overlay-containers/fb34c2731139904b498bc00c594ad467f356bb2d3d0ef54ceb5f42e3255ef716/userdata/run/secrets
    └── Destination: /var/run/secrets/kubernetes.io/serviceaccount
        ├── Type: bind
        └── Source: /var/lib/kubelet/pods/e271eb45-8f82-4657-803c-d3268fbb1c02/volumes/kubernetes.io~projected/kube-api-access-b7fhk
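
One detail worth noting in the output above: entries such as kubectl.kubernetes.io/last-applied-configuration look like JSON-valued annotations unmarshalled into map[string]interface{} and printed with fmt's default formatting, which would explain fragments like map[annotations:map[] ...] and %!s(float64=1). A minimal sketch of that kind of expansion, where the helper name and the fallback behaviour are my assumptions and not necessarily what the PR implements:

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/xlab/treeprint"
)

// addAnnotation expands a JSON-object annotation value into one node per
// top-level key; anything else is printed as a plain "key: value" node.
// Nested maps and numbers fall through to fmt's default formatting, which
// is what produces output like "map[annotations:map[] ...]" or "%!s(float64=1)".
func addAnnotation(parent treeprint.Tree, key, value string) {
	var obj map[string]interface{}
	if err := json.Unmarshal([]byte(value), &obj); err != nil || len(obj) == 0 {
		parent.AddNode(fmt.Sprintf("%s: %s", key, value))
		return
	}
	branch := parent.AddBranch(key)
	for k, v := range obj { // map iteration order is not stable; fine for a sketch
		branch.AddNode(fmt.Sprintf("%s: %s", k, v))
	}
}

func main() {
	tree := treeprint.New()
	ann := tree.AddBranch("Annotations")
	addAnnotation(ann, "kubectl.kubernetes.io/last-applied-configuration",
		`{"apiVersion":"v1","kind":"Pod"}`)
	addAnnotation(ann, "io.kubernetes.pod.name", "cuda-counter")
	fmt.Print(tree.String())
}
```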

This adds an option to print out container metadata. In the case of
CRI-O this can look like:

├── Metadata
│   ├── Name: counters
│   ├── Namespace: default
│   └── Annotations
│       ├── io.kubernetes.cri-o.ImageName: quay.io/adrianreber/counter:latest

Currently this contains all annotations from the container.

Signed-off-by: Adrian Reber <[email protected]>
The output of '--all' changes with the introduction of the additional
metadata information. This removes two hardcoded lines from the expected
test output and moves them to the section where a line only needs to exist
somewhere in the output.

Signed-off-by: Adrian Reber <[email protected]>
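
For context on the test change described above (this is not the repository's actual test code, just the idea sketched in Go): instead of asserting that a given line sits at a fixed position in the '--all' output, the check only needs to assert that the line occurs somewhere in it.

```go
package main

import (
	"fmt"
	"strings"
)

// outputHasLine reports whether want appears as one of the lines of out,
// regardless of where in the tree it shows up.
func outputHasLine(out, want string) bool {
	for _, line := range strings.Split(out, "\n") {
		if strings.TrimSpace(line) == strings.TrimSpace(want) {
			return true
		}
	}
	return false
}

func main() {
	out := "counters\n├── Engine: CRI-O\n├── IP: 10.85.0.41\n"
	fmt.Println(outputHasLine(out, "├── IP: 10.85.0.41")) // true
}
```
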
adrianreber (Member, Author)

@rst0git updated, please take a look
