Suggestions/Improvements while approaching Avalanche #885
Replies: 3 comments 1 reply
-
Hi @francesco-p! Thanks for your feedback! :)
Agreed. I think the examples directory already provides some support in this sense. Are there any specific features you find missing, or do we just need to improve the examples available there?
This is already on our priority list! Very much needed!
We already have pytorchcv as an (optional) dependency in Avalanche. Still, I think you can already use timm, as it is compatible with Avalanche. In your opinion, what features are missing from its support?
-
Thanks for the feedback, these are all important points.
Can you explain why reproducible-cl is not adequate for this use case? We have some other examples, but I think starting from a reproducible baseline is generally better, because all the hyperparameters are already set up according to the best results in the literature.
I totally agree with this. We need to start adding more examples. You are not greedy, everyone does it ;)
I agree that Avalanche should be compatible with other libraries in the PyTorch ecosystem. However, I don't see much value in a thin wrapper. As Vincenzo said, we already wrap pytorchcv. The problem is that there are dozens of useful libraries, everyone uses different ones, and we can't possibly provide thin wrappers around all of them. For example, another library that can easily be integrated with Avalanche is torchmetrics. Another related problem with thin wrappers is that the package would end up with a lot of dependencies, which is not ideal.
-
(1). Yeah, it is super nice to have scripts to replicate SOTA methods, but I was thinking more of some "compositional use cases", i.e. examples that better span Avalanche's potential. For example, I'm used to FACIL: while much less powerful, it is very straightforward to use and its output is very clear (task-aware and task-agnostic accuracy matrices). (3). If you want to maintain SOTA standards then I guess timm is the right choice, because it introduces the latest models and weights (which is crucial; we all know the struggle of training those beasts). For example, they introduce all the ViT variations (small, base, large) along with pretrained weights, and they also introduced the latest Facebook architecture, ConvNeXt: https://arxiv.org/abs/2201.03545 . (2). Yes, of course I would like to contribute; indeed, I'm writing some small examples (and I just spotted a small error in the docs :D)
-
Hello everyone,
I'm starting to use Avalanche to run some experiments, and I have a few considerations that might help development.
1. Why not introduce some (really) simple baseline experiments as "templates"? (I know there is the reproducible-CL repo, but it is focused on reproducing research rather than on showing how to use Avalanche.)
2. In the documentation, it might be helpful to introduce an atomic use case for each class or function you define (as in the PyTorch documentation), for example in the nc_benchmark documentation. I might be greedy, but when I approach a new library I want a quick, simple example. If you can do it for each module, you can do "lego programming" as in PyTorch.
3. Why not use timm as a model provider for Avalanche? It should be easy to do, since all its models are split into backbone + fc... I personally believe it is the SOTA of network architectures, so it could be a very appealing feature.