-
Using a Docker-based template that has access to a GPU: if we leave the workspace running and do not use the GPU actively, the GPU becomes unavailable after some time.

```console
(DL) atif@workspace:~$ nvidia-smi
Failed to initialize NVML: Unknown Error
```

This might be related to a similar issue: NVIDIA/nvidia-docker#1671
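For context, the workarounds discussed in NVIDIA/nvidia-docker#1671 revolve around systemd's cgroup management revoking the container's access to the `/dev/nvidia*` device nodes on a `daemon-reload`. A sketch of the two commonly cited host-side mitigations (paths assume a stock Docker install on a systemd host; verify against your distro before applying):

```shell
# Symptom: nvidia-smi works when the container starts, then later fails with
# "Failed to initialize NVML: Unknown Error" after systemd reloads and the
# container loses access to the /dev/nvidia* device nodes.

# Workaround A: let Docker manage cgroups itself instead of delegating to
# systemd. Add the exec-opt to /etc/docker/daemon.json and restart Docker:
#   {
#     "exec-opts": ["native.cgroupdriver=cgroupfs"]
#   }
sudo systemctl restart docker

# Workaround B: disable cgroup handling in the NVIDIA container runtime
# (set no-cgroups = true in /etc/nvidia-container-runtime/config.toml) and
# pass the device nodes to the container explicitly, e.g.:
#   docker run --gpus all \
#     --device /dev/nvidia0 --device /dev/nvidiactl \
#     --device /dev/nvidia-uvm ubuntu nvidia-smi
```

Both change host-level behavior, so they are worth testing on a non-production machine first.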
Replies: 2 comments 5 replies
-
@matifali since this isn't a Coder-specific issue, I converted this into a discussion so the community can chip in with potential solutions for your issue.
-
```hcl
resource "docker_container" "workspace" {
  count      = data.coder_workspace.me.start_count
  image      = docker_image.aihwkit.image_id
  cpu_shares = var.cpu
  memory     = var.ram * 1024
  gpus       = "all"
  # Uses lower() to avoid Docker restriction on container names.
  name     = "coder-${data.coder_workspace.me.owner}-${lower(data.coder_workspace.me.name)}"
  # Hostname makes the shell more user friendly: coder@my-workspace:~$
  hostname = lower(data.coder_workspace.me.name)
  dns      = ["1.1.1.1"]
  # Use the Docker gateway if the access URL is 127.0.0.1.
  command = ["sh", "-c", replace(coder_agent.dev.init_script, "127.0.0.1", "host.docker.internal")]
  env     = ["CODER_AGENT_TOKEN=${coder_agent.dev.token}"]
  host {
    host = "host.docker.internal"
    ip   = "host-gateway"
  }
}
```

But after every coder update, we have to run …
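To narrow down whether the breakage is in the workspace container or in the host's Docker/NVIDIA toolkit setup, a quick smoke test from the host helps (the CUDA image tag and the container name `coder-atif-workspace` are illustrative, the latter following the naming pattern in the template above):

```shell
# Run a throwaway container with all GPUs attached. If this prints the usual
# nvidia-smi table, the Docker + NVIDIA Container Toolkit side is healthy
# and the problem is scoped to the long-running workspace container.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

# Then re-check the existing workspace container directly:
docker exec coder-atif-workspace nvidia-smi
```

If the throwaway container works while the long-running one fails, that matches the cgroup-revocation symptom from NVIDIA/nvidia-docker#1671 rather than a template problem.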
Removing `runtime` and using `gpus` in `docker_container` fixed it for me.
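For anyone applying this with the Terraform Docker provider, the change described above looks roughly like this (a sketch, not the poster's exact template):

```hcl
resource "docker_container" "workspace" {
  # ...

  # Before: the NVIDIA runtime, which was prone to losing GPU access
  # while the container kept running.
  # runtime = "nvidia"

  # After: the provider's gpus argument, equivalent to the
  # `docker run --gpus all` style of attaching devices.
  gpus = "all"
}
```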