Predictions are inverted (sometimes) #50

Open

henrygouk opened this issue Jan 15, 2019 · 1 comment
Comments

@henrygouk

Describe the bug
I've noticed that the validation accuracy logged by dl4j is almost an exact inversion of the test set accuracy reported by Weka for some binary classification datasets I've been using (e.g. if Weka were to report 85%, the dl4j log would show roughly 15%).
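
To illustrate the arithmetic behind "inversion" (my sketch, not from the original report): if one side of the evaluation maps the two class indices the wrong way around, every correct prediction is counted as an error and vice versa, so the reported accuracy becomes 1 - acc. The counts below are made up.

```java
public class InvertedAccuracyDemo {
    public static void main(String[] args) {
        // Confusion counts with the correct label mapping (made-up numbers):
        // rows = actual class, columns = predicted class.
        int tp = 700, fn = 50, fp = 100, tn = 150; // 1000 instances total
        double n = tp + fn + fp + tn;

        // Accuracy under the correct mapping: (TP + TN) / N = 0.85
        double acc = (tp + tn) / n;

        // With the two class indices swapped on one side, correct
        // predictions count as errors and vice versa:
        // (FN + FP) / N = 0.15 = 1 - acc
        double invertedAcc = (fn + fp) / n;

        System.out.printf("accuracy: %.2f, inverted: %.2f%n", acc, invertedAcc);
    }
}
```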

To Reproduce
Steps to reproduce the behavior:

  1. Download the ARFF file from https://www.openml.org/d/1590
  2. Create a network in the Weka Explorer with three Dense layers of 100 units each and a batch size of 100, leaving everything else at its default (see the sketch after these steps for an equivalent programmatic setup).
  3. Observe the accuracy reported by dl4j during training.
  4. Compare it with the test accuracy reported by Weka and note the mismatch.

Note that this does not happen with all binary classification datasets I have used.
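
For reference, a minimal sketch of the same configuration through the wekaDeeplearning4j Java API rather than the Explorer GUI. The class and setter names follow the package's documented examples, but the exact signatures in v1.5.11 are assumptions, as is the local file path.

```java
import weka.classifiers.functions.Dl4jMlpClassifier;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.dl4j.layers.DenseLayer;
import weka.dl4j.layers.OutputLayer;

public class InversionRepro {
    public static void main(String[] args) throws Exception {
        // The dataset from https://www.openml.org/d/1590, saved locally as ARFF.
        Instances data = DataSource.read("adult.arff"); // placeholder path
        data.setClassIndex(data.numAttributes() - 1);

        Dl4jMlpClassifier clf = new Dl4jMlpClassifier();

        // Three dense layers of 100 units each; the output layer is required.
        DenseLayer d1 = new DenseLayer();
        d1.setNOut(100);
        DenseLayer d2 = new DenseLayer();
        d2.setNOut(100);
        DenseLayer d3 = new DenseLayer();
        d3.setNOut(100);
        clf.setLayers(d1, d2, d3, new OutputLayer());

        // The Explorer's "batch size of 100" is left out here; I'm not sure
        // which iterator option it maps to in this version.

        // Training logs the dl4j evaluation; compare it with Weka's own
        // evaluation of the built classifier afterwards.
        clf.buildClassifier(data);
    }
}
```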

Expected behavior
Accuracy reported by dl4j should be similar to what is reported by Weka.

  • Weka version: 3-9-2-SNAPSHOT
  • wekaDeeplearning4j package version: v1.5.11
  • Operating System: Ubuntu 16.04
@henrygouk (Author)

I've come up with a temporary workaround: swapping the order in which the two classes are defined in the ARFF file.
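
A rough sketch of that workaround (my illustration, not code from the report): reverse the value order in the class attribute's declaration in the ARFF header, leaving the data rows untouched. The attribute name "class", the two-value assumption, and the file names are all assumptions here.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;

public class SwapArffClasses {
    public static void main(String[] args) throws Exception {
        // File names are placeholders.
        Path in = Path.of("adult.arff");
        Path out = Path.of("adult-swapped.arff");

        List<String> lines = Files.readAllLines(in).stream()
            .map(SwapArffClasses::swapClassLine)
            .collect(Collectors.toList());
        Files.write(out, lines);
    }

    // Reverse the value order in the class attribute's declaration, e.g.
    // "@attribute class {A,B}" becomes "@attribute class {B,A}". Assumes the
    // attribute is named "class", has exactly two unquoted values, and that
    // neither value contains a comma.
    static String swapClassLine(String line) {
        if (!line.toLowerCase().startsWith("@attribute class")) {
            return line;
        }
        int open = line.indexOf('{');
        int close = line.indexOf('}');
        String[] vals = line.substring(open + 1, close).split(",");
        return line.substring(0, open + 1)
            + vals[1].trim() + "," + vals[0].trim()
            + line.substring(close);
    }
}
```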
