Getting frustrated due to infinite loops. #2633
It's not PDM's fault: I tried installing these dependencies using pip and it takes a long time too, especially with big packages such as torch (750 MB), nvidia-cublas (410 MB), nvidia-cudnn (730 MB), etc. A more constructive approach would be to try to reduce this set of dependencies to highlight the problematic ones, as an example of a dependency tree that takes time to resolve. These examples can then be used to look for optimizations in the libraries responsible for resolving dependencies. Once a minimal set of dependencies has been identified, another constructive approach is to reach out to the maintainers of the problematic dependencies and kindly ask whether they can make their own dependency specifications less strict (or more strict, depending on the situation) to help resolvers find a solution more quickly.
Is there any way to build a dependency resolution system (via PDM) that tries to find a match without downloading and resolving everything?
This is happening again. I think we should have a limit on how many tries before failing. I expected the system to deploy properly and went to sleep, but when I woke up the deployment was broken. I would rather it stop if it's taking too long, so an option for how many retries to make during a dependency resolution attempt would be good.
Why do you think there isn't? https://arc.net/l/quote/gdeikbxb
Thanks, gonna try.
That would be too small for projects with more than 10 big dependencies; a round is smaller than you'd think.
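As a sketch of how such a cap can be set today: PDM exposes a `strategy.resolve_max_rounds` config option that aborts resolution with an error once the limit is reached, instead of spinning indefinitely (the value 1000 below is illustrative, not a recommendation):

```shell
# Cap the number of resolution rounds globally; the resolver fails
# with an error once the cap is reached instead of looping forever.
pdm config strategy.resolve_max_rounds 1000

# Or set it for the current project only:
pdm config --local strategy.resolve_max_rounds 1000
```

Note that, as the comment above says, each "round" is a small step of the resolver, so projects with many large dependencies may legitimately need thousands of rounds.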
BTW, boto3 and the aws family are tough ones for dependency resolution, since they have rather strict version ranges restricting each other. Try using a more accurate version range for these packages.
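For example, a `pyproject.toml` fragment that narrows the boto3 family to mutually compatible ranges might look like this (the version numbers here are purely illustrative; check the actual compatible releases for your project):

```toml
[project]
dependencies = [
    # Narrow ranges reduce the candidate space the resolver must explore.
    # These specific versions are an assumption, not a recommendation.
    "boto3>=1.28,<1.29",
    "botocore>=1.31,<1.32",
    "s3transfer>=0.6,<0.7",
]
```

The tighter the ranges, the fewer candidate combinations the resolver has to try before finding (or rejecting) a solution.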
I removed them and now it's stuck at prompthub-py: `⠼ Resolving: new pin prompthub-py 4.0.0`. Quite long now. Here are my packages; I removed all version restrictions too.
It comes from farm-haystack. Thank you very much for the prompt replies.
Still couldn't resolve. Is there any way to know what exactly is screwing this up?
Add -v to enable terminal logging, and you will probably spot some packages being resolved repeatedly, and why they are rejected.
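Concretely, re-running the lock step with the verbosity flag prints each resolution round and rejected candidate (assuming a standard PDM setup):

```shell
# Verbose output: shows every candidate pin and the reason a
# version gets rejected during resolution.
pdm lock -v

# The flag can be repeated for even more detail.
pdm lock -vv
```

Packages that appear over and over in the log are usually the ones whose version constraints are forcing the resolver to backtrack.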
Found and fixed the first problem: it was due to linters with locked versions. Now it leads to another, this time a weird error,
but it doesn't mention pydantic <=2 ... why is it making that up?
Looks like a bug; this definitely is an infinite loop.
It does in version 1.25.5: https://github.com/deepset-ai/haystack/blob/a8bc7551aeb2036f87cb2a33743f3c2f71b9be52/pyproject.toml#L51. It looks like an infinite loop, but it's probably actually trying to prune branches of a huge tree 😕 Always the same libs that are problematic 😅
Ah, it is fixed in the latest master then.
You can override the resolver to force specific versions of specific packages. |
You could also disallow
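For the override approach, PDM supports resolution overrides in `pyproject.toml`; the package names and versions below are illustrative assumptions, not taken from this project:

```toml
# Force the resolver to use a specific version (or range) for a
# package, skipping the candidates that cause heavy backtracking.
[tool.pdm.resolution.overrides]
pydantic = ">=2.0"
s3transfer = "0.6.2"
```

An override short-circuits the search for that package, which can break an otherwise endless backtracking cycle, at the cost of you taking responsibility for the pinned version's compatibility.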
Thanks a lot, gonna try overriding.
Getting an infinite loop on my first try with pdm 😢.
Now: `pdm add strawberry-graphql-django`. Infinite loop.
Try relaxing some of your pins. |
I would look at asgiref too: https://github.com/strawberry-graphql/strawberry-django/blob/main/pyproject.toml#L36. |
Sorry, I would consider this a well-designed bad case. You can find bad cases in every dependency resolver. However, you can easily spot it by looking at the locking log via
This has happened several times when I am using PDM.
Wasted 2 days trying to fix it. I am going to give up on PDM soon at this rate.
Here are the dependencies.
pdm is getting stuck in an infinite resolution loop at s3transfer.