[feature request] Add new evaluate_model function which can return a more generalized metric #6
Comments
What is the problem with evaluating the model by averaging over batches? Sure, the results might differ slightly due to floating-point error, but isn't that negligible? Furthermore, Keras …
Not all metrics can be evaluated properly in batches and then averaged. Area under the receiver operating characteristic curve (ROC AUC) is a popular metric that has to be computed over the whole validation set rather than averaged over batches; the batch average will be wildly inaccurate or even undefined. For example, say your validation set has two imbalanced classes, a common setup: some batches may contain only one class, and for those batches the AUC is undefined.
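To make the failure mode concrete, here is a minimal sketch with made-up labels and scores (nothing here comes from this repo):

```python
# Per-batch ROC AUC vs. full-set ROC AUC on imbalanced, illustrative data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = np.concatenate([np.ones(10), np.zeros(90)])  # 10% positives
y_pred = rng.random(100)                              # fake model scores

# Computed once over the whole validation set: well defined.
print("full-set AUC:", roc_auc_score(y_true, y_pred))

# Computed per batch and averaged: batches with only one class make
# roc_auc_score raise "Only one class present in y_true".
for i in range(0, 100, 20):
    batch_true, batch_pred = y_true[i:i + 20], y_pred[i:i + 20]
    try:
        print(f"batch {i // 20} AUC:", roc_auc_score(batch_true, batch_pred))
    except ValueError as e:
        print(f"batch {i // 20} AUC: undefined ({e})")
```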
Oh, I see, that's a good point! In that case I think it would be better to introduce a new metric (ROC AUC) and then refactor all available metrics (accuracy, loss, ROC AUC) into a separate class. However, this means that we should also refactor … I'll add this request for new metrics to the roadmap; in the meantime, feel free to create your own PR.
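One possible shape for that refactor is sketched below; every name here (Metric, Accuracy, RocAuc) is hypothetical and not part of this repo's API:

```python
# Hypothetical metric abstraction: each metric scores the *full* set
# of predictions once, instead of being averaged over batches.
from abc import ABC, abstractmethod

import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score


class Metric(ABC):
    """A metric computed once over all validation predictions."""

    @abstractmethod
    def compute(self, y_true: np.ndarray, y_pred: np.ndarray) -> float:
        ...


class Accuracy(Metric):
    def compute(self, y_true, y_pred):
        # Threshold predicted probabilities at 0.5 for the binary case.
        return accuracy_score(y_true, (y_pred >= 0.5).astype(int))


class RocAuc(Metric):
    def compute(self, y_true, y_pred):
        return roc_auc_score(y_true, y_pred)
```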
In evaluate_model, the code below can be used to return metrics that can only be computed on all of the data, as opposed to being averaged over batches as is currently done. For simplicity, you can set numThreads and qSize to 1.
evaluate_model(self, model) becomes:

y_true, y_pred = evaluate(model, data, 1, 1)  # data: will have to convert from the generator (easy); the two 1s are numThreads and qSize
loss = lossFunction(y_true, y_pred)

Accuracy can then be computed with sklearn.metrics.accuracy_score.
You could also support metrics like ROC AUC now; a sketch of the whole function follows.
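Here is a hedged sketch of what that evaluate_model could look like. The (x_batch, y_batch) generator protocol, the Keras-style model.predict call, and the steps parameter are all assumptions; the repo's evaluate(model, data, numThreads, qSize) helper is approximated by a plain single-threaded loop (numThreads = qSize = 1, as suggested above):

```python
import numpy as np
from sklearn.metrics import accuracy_score, log_loss, roc_auc_score


def evaluate_model(model, generator, steps):
    """Collect predictions over the whole validation set, then score once."""
    y_true_parts, y_pred_parts = [], []
    for _ in range(steps):
        x_batch, y_batch = next(generator)        # assumed generator protocol
        y_true_parts.append(np.asarray(y_batch))
        y_pred_parts.append(model.predict(x_batch))
    y_true = np.concatenate(y_true_parts)
    y_pred = np.concatenate(y_pred_parts).ravel()

    # All metrics are computed over the full set, so ROC AUC stays well
    # defined even when individual batches contain a single class.
    return {
        "loss": log_loss(y_true, y_pred),
        "accuracy": accuracy_score(y_true, (y_pred >= 0.5).astype(int)),
        "roc_auc": roc_auc_score(y_true, y_pred),
    }
```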