
The F-test for Comparing Multiple Classifiers

The method of using the F-test for comparing classifiers is loosely based on Looney's work.

This method can be used to compare two or more classifiers. In the context of the F-test, our null hypothesis is that there is no difference between the classification accuracies.

A null hypothesis is a statistical hypothesis proposing that there is no difference between certain characteristics of a population. In this case, the null hypothesis states that there is no difference between the classification accuracies of the multiple classifiers; strictly speaking, the test can only reject this hypothesis or fail to reject it, never prove it.
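Formally, writing p_i for the (unknown) true accuracy of classifier i among the L classifiers under comparison, the hypotheses can be stated as:

```latex
H_0 : p_1 = p_2 = \cdots = p_L
\qquad \text{vs.} \qquad
H_1 : p_i \neq p_j \ \text{for at least one pair } (i, j)
```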

The work

This work is a small test to demonstrate the F-test as described in the MLxtend paper. It uses the Iris dataset. To conduct the test, we compare the accuracy of 5 different models, and then test the null hypothesis across the following classifiers (a minimal sketch follows the list):

  • SVM Linear
  • SVM RBF
  • Linear Discriminant
  • KNN
  • Perceptron
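
Below is a minimal sketch of such a comparison, assuming the scikit-learn implementations of the five models and MLxtend's `ftest` function; the split parameters and variable names are illustrative, not the repository's exact script.

```python
# A minimal sketch, assuming scikit-learn models and mlxtend's ftest;
# parameters here are illustrative, not the repository's exact setup.
from mlxtend.evaluate import ftest
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1, stratify=y
)

models = {
    "SVM Linear": SVC(kernel="linear", random_state=1),
    "SVM RBF": SVC(kernel="rbf", random_state=1),
    "Linear Discriminant": LinearDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(),
    "Perceptron": Perceptron(random_state=1),
}

# Fit each classifier and collect its test-set predictions.
predictions = []
for name, model in models.items():
    model.fit(X_train, y_train)
    predictions.append(model.predict(X_test))
    print(f"{name}: accuracy = {model.score(X_test, y_test):.3f}")

# H0: all five classifiers have the same accuracy.
f_stat, p_value = ftest(y_test, *predictions)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Reject H0: the accuracies differ significantly.")
else:
    print("Fail to reject H0: no significant difference in accuracy.")
```

If the p-value falls below the chosen significance level (0.05 here), the null hypothesis of equal accuracies is rejected; otherwise we fail to reject it.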

Authors

Tomás Costa - GitHub
João Silva - GitHub
