
asynchronous launch #45

Open
dgutson opened this issue May 14, 2023 · 6 comments

Comments

dgutson commented May 14, 2023

How can I enqueue without blocking?

For example:
ts "/usr/bin/cat /dev/random > /dev/null"

Maybe it blocks because I have the queue "full"?

My use case is that I need to enqueue 3k tasks, and I want the enqueueing process to be asynchronous. I assume that just using & to launch them in the background is a hack and that there should be a better way.

If this is not possible, then my feature request is to add a flag that makes enqueueing return immediately.
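A minimal sketch of the "&" workaround discussed here: background each enqueue call so the submitting loop itself returns immediately. Note that `TS_CMD` is a stand-in of my own, defaulting to the no-op `true`, so the sketch runs even where task-spooler is not installed; with the real client you would set TS_CMD=ts.

```shell
#!/bin/sh
# Sketch: background each enqueue call so the submitting loop does not
# block. TS_CMD stands in for the real `ts` client binary (assumption:
# set TS_CMD=ts to use task-spooler for real).
TS_CMD=${TS_CMD:-true}

count=0
for i in 1 2 3; do
    # Each client call is backgrounded; the loop does not wait for
    # the server to accept the job before moving on.
    $TS_CMD echo "job $i" &
    count=$((count + 1))
done
wait    # waits for the client calls to return, not for the jobs to run
echo "submitted $count jobs"
```

The `wait` at the end only collects the backgrounded client processes; the queued jobs themselves still run whenever the server schedules them.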

@justanhduc
Owner

Hey @dgutson. Sorry for the late response. I don't see any problem with using & and sending stdout to /dev/null. But yeah, it's possible to add a feature that sends the client to the background. I will work on it when I can find some time.

dgutson commented May 25, 2023

I found a workaround: setting TS_MAXCONN to a large number (e.g. 10000). But I don't know whether that is correct, or how to make it unlimited.

@justanhduc
Owner

TS_MAXCONN limits the number of unfinished jobs. Basically, the max value depends on the ulimit for open file descriptors. Setting it to a large number is fine, since ulimit prevents you from queuing too many jobs anyway.
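Following that reasoning, one way to size TS_MAXCONN is to read the per-process open-file limit and use it directly. This is my own rule of thumb derived from the comment above, not documented task-spooler behaviour:

```shell
#!/bin/sh
# Sketch: derive a TS_MAXCONN value from the open-fd limit.
# Each waiting client holds a socket (one fd) open, so a TS_MAXCONN
# far above `ulimit -n` buys nothing.
fd_limit=$(ulimit -n)
TS_MAXCONN=$fd_limit
export TS_MAXCONN
echo "fd limit: $fd_limit, TS_MAXCONN: $TS_MAXCONN"
```

Exporting the variable before starting the server makes the setting visible to it; raising it past `ulimit -n` would require raising the ulimit first.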

dgutson commented May 27, 2023

My idea is that queued jobs shouldn't consume any resources until they run; that's my rationale for unlimited queuing. Imagine I have several hundred thousand files to process over a weekend: I should be able to enqueue all of them, quietly waiting to be run whenever a slot frees up.
Why should jobs that are merely waiting consume any resource at all, such as an open file descriptor?

@justanhduc
Owner

In order to queue, the client (your job) needs to contact the server, which opens a socket. The maximum number of sockets is limited by TS_MAXCONN, and also by ulimit. You can search for unix domain sockets for more details. So yes, you do open a lot of file descriptors just by queuing a lot of jobs.

dgutson commented May 27, 2023

Maybe I'm thinking of localhost only, where what I enqueue is just the shell command, which at the end of the day is just text.
