-
Hi @plvaudry, that's a really good question. I don't think there is a definitive answer for all cases. Ideally, all the learners, transformers, and so on should be robust against such changes, and luckily the majority of them already are. @MaxHalford wrote a nice set of tests to check these properties. I know the k-NN models reasonably well, but for the other modules I think @MaxHalford, @jacobmontiel, @raphaelsty, @gbolmier, @VaysseRobin, @AdilZouitine, and @hoanganhngo610 can give you more hints. I believe documenting these details is a task worth putting in River's roadmap. :)
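To make that concrete, here is a minimal sketch (my own illustration, not part of the original reply; the feature names and values are made up) of how `linear_model.LinearRegression` copes with dicts whose keys change from one sample to the next:

```python
# A sketch, not from the thread: feed a River linear model dicts whose
# keys change between samples. Feature names and values are made up.
from river import linear_model

model = linear_model.LinearRegression()

stream = [
    ({"temp": 21.0, "humidity": 0.4}, 3.0),  # both features present
    ({"temp": 22.5}, 3.2),                   # "humidity" is missing
    ({"humidity": 0.5, "wind": 7.0}, 2.8),   # "temp" disappears, "wind" appears
]

for x, y in stream:
    print(model.predict_one(x))  # features with no weight yet contribute nothing
    model.learn_one(x, y)        # a weight is created lazily for each new feature

print(model.weights)  # one weight per feature seen so far
```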
-
It seems implicit that, because each input is represented as a dict, the feature set can vary over time: some features may occasionally be missing, and features may appear or disappear altogether. Also, I noticed that some feature selection methods have been implemented, so there is yet another way the feature set could vary (see the sketch below).
However, I have not found any documentation on how the implemented models behave in these circumstances.
If the behaviour varies from model to model, what are the differences?
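As an illustration of the feature-selection point above (a sketch of my own, not from the discussion; the feature names, values, and the choice of `SelectKBest` with `stats.PearsonCorr` are assumptions made for the example), a selector can change which keys downstream steps see as the stream evolves:

```python
# Sketch: a feature selector that changes the feature set mid-stream.
from river import feature_selection, stats

# Keep only the single feature most correlated with the target so far.
selector = feature_selection.SelectKBest(similarity=stats.PearsonCorr(), k=1)

stream = [
    ({"a": 1.0, "b": 10.0, "c": 0.1}, 12.0),
    ({"a": 2.0, "b": 20.0, "c": 0.3}, 22.0),
    ({"a": 3.0, "b": 30.0, "c": 0.2}, 31.0),
]

for x, y in stream:
    selector.learn_one(x, y)
    print(selector.transform_one(x))  # downstream steps only see the surviving key(s)
```

Whatever the selector keeps, downstream models still receive plain dicts, so the question of how each model reacts to keys appearing and disappearing is the same one raised above.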