Memory not cleaned up after worker threads are terminated #45685
Comments
Define "as much"? I don't see anything out of the ordinary. Are you getting confused by the main thread's GC growing the JS heap from time to time?
It seems that memory occupied by …
worker_threads doesn't use vm but I see what you mean. 289 -> 705 kB over the course of five minutes isn't that worrisome, that's probably the JIT's optimizing compiler kicking in.
I have a more complex long running application with multiple workers. The application logic is as follows:
The memory has grown from ~400MB to ~1GB over two weeks. I don't know the cause yet, but from the PoC I suspect it might be memory occupied by …
Closing this issue, since the bug was elsewhere: a reference to the terminated worker was not being cleaned up.

```js
// memory-leaking version
const { memoryUsage } = require('node:process')
const { Worker, isMainThread } = require('node:worker_threads')

if (isMainThread) {
  const workers = []
  let counter = 0
  console.log(++counter, JSON.stringify(memoryUsage()))
  setInterval(() => {
    console.log(++counter, JSON.stringify(memoryUsage()))
  }, 5000)
  setInterval(() => {
    const worker = new Worker(__filename)
    worker.on('online', () => {
      worker.terminate()
    })
    workers.push(worker) // keeps a reference to the terminated worker
  }, 100)
} else {
  // some operation
  const x = 3
}
```

When running the PoC code in the original post, the `heapTotal` eventually maxed out at 12615680 bytes and did not grow any more. On the other hand, when running the buggier version above, the `heapTotal` exceeded 12615680 bytes quickly and continued to grow. Thank you both @bnoordhuis @ywave620
Version
v16.14.0
Platform
Darwin akays.local 22.1.0 Darwin Kernel Version 22.1.0: Sun Oct 9 20:14:30 PDT 2022; root:xnu-8792.41.9~2/RELEASE_ARM64_T8103 arm64
Subsystem
worker_threads
What steps will reproduce the bug?
How often does it reproduce? Is there a required condition?
Always reproducible. Also tested on node v19.0.0
What is the expected behavior?
Memory usage should not grow as much.
What do you see instead?
Memory usage continues to grow.
Additional information
Initial memory usage
Memory usage after ~5 minutes