analyzer seems to fight with build_runner, causing build_runner to take longer to complete #49427
We've also had users report that they must close their IDE before running build_cleaner or before upgrading pub dependencies across packages, because otherwise the IDE hangs and DAS memory creeps up.
I suspect @srawlins is correct and this is caused by file watching and rereading. It may even be that something deeply disturbing is going on: build_runner writes a file, the analysis server detects the change, reruns a bunch of analysis and writes a summary file or other temp data, which might then trigger something in build_runner (after all, I know parts of build_runner can use analysis driver code to do things), and we could be in a drag race between the two processes.

strace output of one or both processes while this is happening (though it will likely be enormous) would probably provide useful data to track down which files are being opened/reopened by both. Checking top while the processes are running and observing where the time is being spent (user, system, or blocked on IO) will help you figure out whether we're thrashing the physical disk. My guess, since the growth is exponential and that curve holds even when the working set is smallish, is that we're just rapidly thrashing the cache rather than the physical disk.
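On macOS (where strace isn't available) a lower-tech option is to log file-change events directly. Here's a minimal Dart sketch, assuming package:watcher as the watching mechanism (my choice for illustration, not necessarily what either process uses internally), that counts how often each path under the project changes while build_runner and the IDE are both running; paths with large counts would be candidates for the suspected re-trigger loop.

```dart
// Sketch: count file-change events under the current project while
// build_runner and/or the analysis server are running.
// Assumes package:watcher is available as a dependency.
import 'package:watcher/watcher.dart';

void main() {
  final counts = <String, int>{};
  final watcher = DirectoryWatcher('.');

  watcher.events.listen((WatchEvent event) {
    counts[event.path] = (counts[event.path] ?? 0) + 1;
    // Paths that show up many times are being rewritten repeatedly,
    // which would be consistent with the two processes re-triggering
    // each other.
    if (counts[event.path]! % 10 == 0) {
      print('${event.path} changed ${counts[event.path]} times');
    }
  });

  print('Watching ${watcher.path}; press Ctrl+C to stop.');
}
```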
Just performed another analysis with a barrel file (CC @mraleph), but it might be a little arbitrary. I took the same files above, 3200 files which use json_serializable, and added a "barrel file" halfway through which exports the first 1600 files. The files in the second half then import that barrel file in place of any of the first 1600 files. In this case it seems that DAS watching the files adds 7% to the time to generate files.

Perhaps another interesting case would be one where the barrel file exports a smaller set of files (say the first 400 or even 200), but many more files depend on the barrel file. This may be more realistic, and may show worse performance, as every …
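For concreteness, a hypothetical sketch of what that layout looks like (the file and package names here are made up; the real benchmark files are generated):

```dart
// lib/barrel.dart: sketch of the barrel file used in the benchmark.
// Each model_NNNN.dart stands in for one of the 3200 files that uses
// json_serializable.
export 'src/model_0001.dart';
export 'src/model_0002.dart';
// ... and so on through ...
export 'src/model_1600.dart';

// Files model_1601.dart .. model_3200.dart then replace their direct
// imports of the first half with a single:
//
//   import 'package:benchmark_app/barrel.dart';
```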
cc @davidmorgan
Thanks. This is all under control ;) follow along if you like at dart-lang/build#3811
I'll close this in favour of the …
My mistake: the parent issue for the new "check DAS interaction" work is dart-lang/build#3800; the issue itself is dart-lang/build#3909.
With some benchmarks I was working on to see how build_runner interacted with DAS, I took down the following numbers (each is the wall time, average of 3 trials):
(If there is non-linear growth in build_runner itself, that is something separate, and something I may file against build_runner; I think it's a known problem.)
It occurred to me that I had VS Code running at the same time as these benchmark runs, open to the code base where the files were being generated, and the DAS spinner was spinning... a lot. I decided I'd better close VS Code and run the numbers again:
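For reference, here's a rough sketch of how such wall times could be collected; the exact build_runner invocation and the trial count are assumptions on my part, not the benchmark script actually used:

```dart
// Sketch: time `dart run build_runner build` three times and average
// the wall-clock duration. Run once with the IDE/DAS open and once
// with it closed to compare.
import 'dart:io';

Future<void> main() async {
  const trials = 3;
  var totalMs = 0;

  for (var i = 0; i < trials; i++) {
    final stopwatch = Stopwatch()..start();
    final result = await Process.run(
      'dart',
      ['run', 'build_runner', 'build', '--delete-conflicting-outputs'],
    );
    stopwatch.stop();
    if (result.exitCode != 0) {
      stderr.write(result.stderr);
      exit(result.exitCode);
    }
    totalMs += stopwatch.elapsedMilliseconds;
    print('trial ${i + 1}: ${stopwatch.elapsedMilliseconds} ms');
  }

  print('average: ${totalMs ~/ trials} ms over $trials trials');
}
```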
This suggests to me that DAS is causing build_runner to slow down, and that the problem is worse with more files (like over 3000 files). I can imagine the file watcher notifies DAS of tons of changed files, which DAS then re-reads.
Why does it get worse with more files? Do we just not overwhelm the physical disk (whatever is on my 2021 MacBook Pro) until a few thousand files are being thrashed? When the file watcher notifies DAS of a changed file, does DAS re-read more than just that file from disk? Perhaps all files that were known to be downstream of that file (in terms of imports)?
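To make "downstream of that file" concrete, here's a toy sketch (not the analyzer's actual implementation or data structures) of computing the transitive importers that would plausibly need re-analysis when one file changes:

```dart
// Toy sketch: given a map from each file to the files it imports,
// compute everything "downstream" of a changed file, i.e. the
// transitive importers that would need to be re-analyzed.
Set<String> downstreamOf(String changed, Map<String, Set<String>> imports) {
  // Invert the import graph: file -> files that import it.
  final importers = <String, Set<String>>{};
  imports.forEach((file, deps) {
    for (final dep in deps) {
      importers.putIfAbsent(dep, () => <String>{}).add(file);
    }
  });

  // Walk the reverse edges transitively.
  final affected = <String>{};
  final queue = [changed];
  while (queue.isNotEmpty) {
    final next = queue.removeLast();
    for (final importer in importers[next] ?? const <String>{}) {
      if (affected.add(importer)) queue.add(importer);
    }
  }
  return affected;
}

void main() {
  // With a barrel file in the middle, one change in the first half
  // fans out to every file that imports the barrel.
  final imports = {
    'barrel.dart': {'a.dart', 'b.dart'},
    'c.dart': {'barrel.dart'},
    'd.dart': {'barrel.dart'},
  };
  print(downstreamOf('a.dart', imports)); // {barrel.dart, c.dart, d.dart}
}
```

If DAS does something like this on every change notification, a barrel file that many files import would make each generated-file write fan out widely, which could explain the growth with file count.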
CC @scheglov @bwilkerson @jcollins-g who may be interested