Given a baseline `sklearn.Pipeline` (both data handling + model), the competitor pipelines, and a scoring function, the tool outputs the plots and some extra info. (Assume it will run 10 repetitions of a 10-fold cross-validation setup.)
If the `sklearn.Pipeline` is too costly to train, for instance a `keras.Model`, what should we do?
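A minimal sketch of that entry point, under the 10x10 CV assumption. The function name `compare_pipelines` and the dict-of-pipelines signature are hypothetical, and plotting/extra info are omitted:

```python
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

def compare_pipelines(baseline, competitors, X, y, scoring="accuracy",
                      n_splits=10, n_repeats=10, random_state=0):
    """Run repeated k-fold CV for the baseline and each competitor pipeline.

    Returns a dict mapping pipeline name -> array of n_splits * n_repeats
    fold scores. (Hypothetical helper; plotting is left out.)
    """
    cv = RepeatedStratifiedKFold(n_splits=n_splits, n_repeats=n_repeats,
                                 random_state=random_state)
    results = {"baseline": cross_val_score(baseline, X, y, cv=cv,
                                           scoring=scoring)}
    for name, pipe in competitors.items():
        results[name] = cross_val_score(pipe, X, y, cv=cv, scoring=scoring)
    return results
```

This is also where the cost concern bites: every pipeline is refit `n_splits * n_repeats` times, which is prohibitive for a `keras.Model`.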
Better adaptability to the choice of prior distribution: the proposed framework should not be limited to a conjugate prior, and even less to the "matching prior".
Is the matching prior really adequate for comparing estimators? We can model our posterior as the proposed correlated Student's t distribution only if we assume a very specific prior.
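As a reference point, the posterior assumed under the matching prior (the correlated Bayesian t-test of Corani & Benavoli, if I recall its form correctly) is the Student's t

$$P(\mu \mid \mathbf{x}) = \mathrm{St}\!\left(\mu;\; n-1,\; \bar{x},\; \left(\frac{1}{n} + \frac{\rho}{1-\rho}\right)\hat{\sigma}^2\right), \qquad \rho = \frac{n_{\text{test}}}{n_{\text{test}} + n_{\text{train}}},$$

and this closed form only holds under that one prior choice; any other prior would require a different (possibly non-analytic) posterior.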
Keras classification wrapper models output a multilabel-indicator (class probabilities); will I need to set up the cross-validation manually?
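A sketch of what that manual loop could look like, assuming a hypothetical `build_model` factory standing in for the keras wrapper, and collapsing the probability matrix to labels with `argmax` before scoring:

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold

def manual_cv_scores(build_model, X, y, n_splits=10, random_state=0):
    """Manual CV for a model whose predict() returns class probabilities
    (multilabel-indicator style), e.g. a wrapped keras.Model.

    `build_model` is a hypothetical factory returning a fresh model
    exposing fit() and predict().
    """
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True,
                         random_state=random_state)
    scores = []
    for train_idx, test_idx in cv.split(X, y):
        model = build_model()
        model.fit(X[train_idx], y[train_idx])
        proba = model.predict(X[test_idx])   # shape (n_test, n_classes)
        y_pred = np.argmax(proba, axis=1)    # probabilities -> hard labels
        scores.append(accuracy_score(y[test_idx], y_pred))
    return np.array(scores)
```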
If we perform a single run of CV, our code fails (float division by zero), due to the correlation correction for the t-test.
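A sketch of where that failure likely comes from (this mirrors the Nadeau-Bengio corrected statistic, not necessarily the repo's exact code): the sample variance divides by `n - 1`, so a single run needs an explicit guard.

```python
import numpy as np

def corrected_ttest_stat(diffs, n_train, n_test):
    """Correlation-corrected t statistic for per-fold score differences
    (Nadeau-Bengio style correction; a sketch).

    With a single CV run (len(diffs) == 1) the sample variance divides
    by n - 1 == 0, which would explain the reported division by zero.
    """
    n = len(diffs)
    if n < 2:
        raise ValueError("Need at least 2 folds/runs for the corrected t-test")
    d_bar = np.mean(diffs)
    sigma2 = np.var(diffs, ddof=1)                    # divides by n - 1
    corrected_var = sigma2 * (1.0 / n + n_test / n_train)
    return d_bar / np.sqrt(corrected_var)
```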
If we only have one competitor, our plotting fails at `axes.flatten()[i]`.
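A likely cause and fix: `plt.subplots` squeezes a 1x1 grid down to a bare `Axes`, which has no array interface. Passing `squeeze=False` keeps a 2-D array regardless of the grid size. A sketch, with a hypothetical `plot_posteriors` helper:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt
import numpy as np

def plot_posteriors(results):
    """One panel per competitor; robust to a single competitor.

    (Hypothetical helper; `results` maps name -> array of scores.)
    """
    n = len(results)
    fig, axes = plt.subplots(1, n, squeeze=False)  # always a 2-D ndarray
    flat = axes.flatten()
    for i, (name, scores) in enumerate(results.items()):
        flat[i].hist(scores)
        flat[i].set_title(name)
    return fig
```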
If we mix a normal Pipeline with a keras model wrapped in a sklearn Pipeline, we need to perform different versions of CV.
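One way to do that mixing is a dispatcher. The `needs_manual_cv` flag below is hypothetical (the keras-backed pipelines would have to be tagged somehow); plain sklearn pipelines go through `cross_val_score`, flagged ones through a manual fold loop that collapses probabilities to labels:

```python
import numpy as np
from sklearn.base import clone
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold, cross_val_score

def run_cv(estimator, X, y, n_splits=10):
    """Dispatch to the right CV routine per estimator (a sketch).

    Estimators carrying a hypothetical `needs_manual_cv` attribute
    (e.g. keras-backed pipelines emitting probability matrices) get a
    manual fold loop; the rest use cross_val_score.
    """
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    if not getattr(estimator, "needs_manual_cv", False):
        return cross_val_score(estimator, X, y, cv=cv)
    scores = []
    for tr, te in cv.split(X, y):
        model = clone(estimator).fit(X[tr], y[tr])
        y_pred = np.argmax(model.predict(X[te]), axis=1)  # probs -> labels
        scores.append(accuracy_score(y[te], y_pred))
    return np.array(scores)
```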
Some ideas...