Chunk training fails for higher core count machines #860
Labels: Bug (something isn't working)

Comments
I attempted a run with TransLink's model on our own 24-CPU, 250 GB RAM machine using the current ActivitySim "main" branch and got the following results:

Log files are attached. We should not have a case where running in chunk production mode causes an out-of-memory error, especially when using a chunk_cache.csv that was created on the same machine!
Describe the bug
TransLink was trying to determine the minimum and recommended CPU & memory requirement for our current ABM model by utilizing multi-processing with chunking turned on in ActivitySim. Our testing was done on Azure VMs. We ran into out-of-memory issues for high CPU core count machines, even though there is a lot of memory on the VM.
To Reproduce
We are running our model on v1.2.1. Our input data contains 1.1 million households and 2.88 million persons, with ~1700 TAZs; our model uses a 1-zone system with 48 half-hour time windows. We performed 10 runs with different combinations of CPU and memory configurations; please see our results below.

Note that the CPU and memory under the run machine label refer to the size of the VM, not the chunking configuration; the num_process and chunk_size columns contain our chunking configuration.
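For reference, the chunking knobs referenced above live in settings.yaml. The snippet below is illustrative only (values are made up, and the key names are as I understand them from the ActivitySim documentation, not our exact run configuration):

```yaml
multiprocess: True
num_processes: 12              # restricted below the VM core count on large machines
chunk_size: 200_000_000_000    # global memory budget for chunking, in bytes
chunk_training_mode: training  # later switched to "production" to reuse chunk_cache.csv
```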
Steps to reproduce the behavior:

Edit settings.yaml to have the correct chunk training mode, number of processes, and chunk size as indicated. Sharrow is off for our model runs. Note that we restricted the number of processes (num_process in the settings.yaml for config_mp) on higher core count machines to avoid high multiprocessing overhead.

Expected behavior
We should be able to do chunk training on higher-core count machines as long as we have enough memory. We should also be able to use the chunk_cache.csv result from a lower core count and lower memory machine, and have it run successfully on higher spec machines. This does not seem to be the case.
These unexpected chunk training and chunk production failures make it impossible for us to determine minimum and recommended specs, given the wide range of VMs and servers our partners and stakeholders could use to run our model.
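As a back-of-envelope illustration of why higher core counts can squeeze the memory budget (a hypothetical heuristic I made up for this sketch, not ActivitySim's actual chunking logic), the global chunk budget one would expect to be safe on a given VM shrinks as the process count grows, since each process carries fixed overhead:

```python
def suggest_chunk_size(total_ram_gb, num_processes,
                       headroom=0.8, overhead_gb_per_proc=1.0):
    """Hypothetical heuristic: leave 20% RAM headroom, subtract a fixed
    per-process overhead, and budget the rest as a global chunk size
    expressed in bytes (ActivitySim's chunk_size is in bytes)."""
    usable_gb = total_ram_gb * headroom - num_processes * overhead_gb_per_proc
    return int(max(usable_gb, 0) * 2**30)

# On a 250 GB VM: more processes leave a smaller safe chunk budget.
print(suggest_chunk_size(250, 12))  # ~188 GiB in bytes
print(suggest_chunk_size(250, 24))  # ~176 GiB in bytes
```

The point of the sketch is only that a chunk_cache.csv trained on a low-core machine encodes memory assumptions that may not transfer to a machine running many more processes.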
Screenshots
We have full logs for these test runs. Please reach out to me on Teams or via email; I'm happy to send them to anyone interested in looking into this.

Here is the memory profile for the no-multiprocessing run with a 25% sample; keep in mind that we ran our model with ActivitySim v1.2.1.
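For anyone wanting to cross-check peak memory of a comparable single-process run without extra dependencies, a stdlib-only sketch (the function name is mine; ru_maxrss units differ by platform) is:

```python
import resource
import sys

def peak_rss_gib():
    """Peak resident set size of this process, in GiB.
    Linux reports ru_maxrss in KiB; macOS reports it in bytes."""
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    divisor = 2**30 if sys.platform == "darwin" else 2**20
    return peak / divisor

# e.g. call at the end of a run to record the peak for that process
print(f"peak RSS: {peak_rss_gib():.2f} GiB")
```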
Additional context
I had a discussion with @dhensle about this issue, and it looks like he is facing some issues with chunk training taking a long time with SANDAG's model. This issue could also be somewhat related to previous issues #543, #683, and #733.
We are now trying the version of ActivitySim on the main branch and will keep you posted on any memory profile changes there.