Superpixels: An Evaluation of the State-of-the-Art

This repository contains the source code used for evaluation in [1], a large-scale comparison of state-of-the-art superpixel algorithms.

ArXiv | Project Page | Datasets | Doxygen Documentation

This repository subsumes earlier work on comparing superpixel algorithms: davidstutz/gcpr2015-superpixels, davidstutz/superpixels-revisited.

Please cite the following work if you use this benchmark or the provided tools or implementations:

[1] D. Stutz, A. Hermans, B. Leibe.
    Superpixels: An Evaluation of the State-of-the-Art.
    Computer Vision and Image Understanding, 2018.

Also make sure to cite the additional papers when using the datasets or superpixel algorithms.

Updates:

  • A Docker setup containing many of the algorithms was added to ./docker.
  • An implementation of the average metrics, i.e. Average Boundary Recall (called Average Miss Rate in the updated paper), Average Undersegmentation Error and Average Explained Variation (called Average Unexplained Variation in the updated paper), is provided in lib_eval/evaluation.h, together with an easy-to-use command line tool, eval_average_cli; see the corresponding documentation and examples in Executables and Examples, as well as the sketch after this list.
  • On Mar 29, 2017, the paper was accepted for publication in CVIU.
  • The converted (i.e. pre-processed) NYUV2, SBD and SUNRGBD datasets are now available in the data repository.
  • The source code of MSS has been added.
  • The source code of PF and SEAW has been added.
  • Doxygen documentation is now available here.
  • The paper was in preparation for an extended period of time, so some recent superpixel algorithms, including SCSP and LRW, are not part of the comparison.
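
To make the averaging concrete: each average metric integrates the corresponding metric over the number of generated superpixels within a fixed interval and normalizes by the interval length. The following C++ sketch illustrates this idea using the trapezoidal rule; the function name and signature are illustrative and do not correspond to the actual interface in lib_eval/evaluation.h.

    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Illustrative sketch only: average a metric curve over the number of
    // superpixels K via the trapezoidal rule. The pairs (K[i], values[i])
    // are assumed sorted by K in ascending order. This is not the
    // interface provided by lib_eval/evaluation.h.
    double averageMetric(const std::vector<double> &K,
                         const std::vector<double> &values) {
        assert(K.size() == values.size() && K.size() >= 2);

        double integral = 0.0;
        for (std::size_t i = 1; i < K.size(); ++i) {
            // Area of the trapezoid between consecutive superpixel counts.
            integral += 0.5 * (values[i - 1] + values[i]) * (K[i] - K[i - 1]);
        }

        // Normalize by the length of the integration interval [K_min, K_max].
        return integral / (K.back() - K.front());
    }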

Introduction

Superpixels group pixels similar in color and other low-level properties. In this respect, superpixels address two problems inherent to the processing of digital images: firstly, pixels are merely a result of discretization; and secondly, the high number of pixels in large images prevents many algorithms from being computationally feasible. Superpixels were introduced as more natural entities - grouping pixels which perceptually belong together while heavily reducing the number of primitives.

This repository can be understood as supplementary material for an extensive evaluation of 28 algorithms on 5 datasets regarding visual quality, performance, runtime, implementation details and robustness - as presented in [1]. To ensure a fair comparison, parameters have been optimized on separate training sets; as the number of generated superpixels heavily influences parameter optimization, we additionally enforced connectivity. Furthermore, to evaluate superpixel algorithms independent of the number of superpixels, we propose to integrate over commonly used metrics such as Boundary Recall, Undersegmentation Error and Explained Variation. Finally, we present a ranking of the superpixel algorithms considering multiple metrics and independent of the number of generated superpixels, as shown below.
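
For intuition, Boundary Recall measures the fraction of ground truth boundary pixels that are matched by a nearby superpixel boundary pixel, where "nearby" is defined by a small tolerance radius (in [1], a fraction of the image diagonal). The following is a minimal C++ sketch of this computation, with an illustrative interface rather than the one used in lib_eval:

    #include <vector>

    // Illustrative sketch only: Boundary Recall with tolerance radius r.
    // Both maps are H x W; true marks a boundary pixel. This is not the
    // lib_eval interface.
    double boundaryRecall(const std::vector<std::vector<bool>> &gtBoundary,
                          const std::vector<std::vector<bool>> &spBoundary,
                          int r) {
        const int H = static_cast<int>(gtBoundary.size());
        const int W = static_cast<int>(gtBoundary[0].size());

        int truePositives = 0;
        int falseNegatives = 0;

        for (int i = 0; i < H; ++i) {
            for (int j = 0; j < W; ++j) {
                if (!gtBoundary[i][j]) {
                    continue;
                }

                // Look for a superpixel boundary pixel within a
                // (2r + 1) x (2r + 1) window around the ground truth pixel.
                bool matched = false;
                for (int di = -r; di <= r && !matched; ++di) {
                    for (int dj = -r; dj <= r && !matched; ++dj) {
                        const int ii = i + di;
                        const int jj = j + dj;
                        if (ii >= 0 && ii < H && jj >= 0 && jj < W
                                && spBoundary[ii][jj]) {
                            matched = true;
                        }
                    }
                }

                if (matched) {
                    ++truePositives;
                } else {
                    ++falseNegatives;
                }
            }
        }

        // Recall = TP / (TP + FN) over ground truth boundary pixels; by
        // convention, recall is perfect if there are no boundary pixels.
        const int total = truePositives + falseNegatives;
        return total > 0 ? static_cast<double>(truePositives) / total : 1.0;
    }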

Algorithm ranking.

The table shows the average ranks across the 5 datasets, taking into account Average Boundary Recall (ARec) and Average Undersegmentation Error (AUE) - lower is better in both cases, see Benchmark. The confusion matrix shows the rank distribution of the algorithms across the datasets.
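
As a purely hypothetical example of how these average ranks arise: an algorithm placing 2nd, 4th, 1st, 3rd and 5th on the five datasets would obtain an average rank of (2 + 4 + 1 + 3 + 5) / 5 = 3.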

Algorithms

The following algorithms were evaluated in [1], and most of them are included in this repository:

| Included     | Algorithm | Reference               |
| ------------ | --------- | ----------------------- |
| ☑️           | CCS       | Ref. & Web              |
| Instructions | CIS       | Ref. & Web              |
| ☑️           | CRS       | Ref. & Web              |
| ☑️           | CW        | Ref. & Web              |
| ☑️           | DASP      | Ref. & Web              |
| ☑️           | EAMS      | Ref., Ref., Ref. & Web  |
| ☑️           | ERS       | Ref. & Web              |
| ☑️           | FH        | Ref. & Web              |
| ☑️           | MSS       | Ref.                    |
| ☑️           | PB        | Ref. & Web              |
| ☑️           | preSLIC   | Ref. & Web              |
| ☑️           | reSEEDS   | Web                     |
| ☑️           | SEAW      | Ref. & Web              |
| ☑️           | SEEDS     | Ref. & Web              |
| ☑️           | SLIC      | Ref. & Web              |
| ☑️           | TP        | Ref. & Web              |
| ☑️           | TPS       | Ref. & Web              |
| ☑️           | vlSLIC    | Web                     |
| ☑️           | W         | Web                     |
| ☑️           | WP        | Ref. & Web              |
| ☑️           | PF        | Ref. & Web              |
| ☑️           | LSC       | Ref. & Web              |
| ☑️           | RW        | Ref. & Web              |
| ☑️           | QS        | Ref. & Web              |
| ☑️           | NC        | Ref. & Web              |
| ☑️           | VCCS      | Ref. & Web              |
| ☑️           | POISE     | Ref. & Web              |
| ☑️           | VC        | Ref. & Web              |
| ☑️           | ETPS      | Ref. & Web              |
| ☑️           | ERGC      | Ref., Ref. & Web        |

Submission

To keep the benchmark alive, we encourage authors to make their implementations publicly available and integrate them into this benchmark. We are happy to help with the integration and update the results published in [1] and on the project page. Also see the Documentation for details.

License

Licenses for source code corresponding to:

D. Stutz, A. Hermans, B. Leibe. Superpixels: An Evaluation of the State-of-the-Art. Computer Vision and Image Understanding, 2018.

Note that the source code and data are based on other projects, for which separate licenses apply.

Copyright (c) 2016-2018 David Stutz, RWTH Aachen University

Please read carefully the following terms and conditions and any accompanying documentation before you download and/or use this software and associated documentation files (the "Software").

The authors hereby grant you a non-exclusive, non-transferable, free of charge right to copy, modify, merge, publish, distribute, and sublicense the Software for the sole purpose of performing non-commercial scientific research, non-commercial education, or non-commercial artistic projects.

Any other use, in particular any use for commercial purposes, is prohibited. This includes, without limitation, incorporation in a commercial product, use in a commercial service, or production of other artefacts for commercial purposes.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

You understand and agree that the authors are under no obligation to provide either maintenance services, update services, notices of latent defects, or corrections of defects with regard to the Software. The authors nevertheless reserve the right to update, modify, or discontinue the Software at any time.

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. You agree to cite the corresponding papers (see above) in documents and papers that report on research using the Software.
