Group Equivariant CNNs Outperform Spatial Transformers on Tasks Which Require Rotation Invariance

This study aims to show that group equivariant CNNs outperform spatial transformers on tasks that demand rotation invariance, by providing theoretical background and an experimental performance comparison with detailed analysis. The full study is in report.pdf.

Model implementations

The models folder contains implementations of group equivariant neural networks and spatial transformers. All layers are implemented from scratch and are located in the layers subfolder. Similarly, implementations of interpolation-based lifting convolution kernels and group convolution kernels are in the kernels subfolder. The implementation of the localization network is in localization_net.py. The discretized implementation of SO(2) is in the groups folder, in discrete_so2.py.
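As a rough sketch of what a discretized SO(2) might look like, the usual construction takes the cyclic subgroup of rotations by multiples of 2π/n. The class and method names below are illustrative, not the actual API of discrete_so2.py:

```python
import numpy as np

class DiscreteSO2:
    """Sketch of SO(2) discretized into n rotations (a cyclic group)."""

    def __init__(self, n: int):
        self.n = n  # number of discrete rotations

    def elements(self) -> np.ndarray:
        # group elements as angles k * 2*pi/n, for k = 0..n-1
        return np.arange(self.n) * 2 * np.pi / self.n

    def product(self, theta1: float, theta2: float) -> float:
        # composing two rotations adds their angles modulo 2*pi
        return (theta1 + theta2) % (2 * np.pi)

    def inverse(self, theta: float) -> float:
        # the inverse rotation undoes the angle
        return (-theta) % (2 * np.pi)

    def matrix(self, theta: float) -> np.ndarray:
        # 2x2 rotation matrix giving the action on the plane
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]])
```

The `matrix` representation is what an interpolation-based kernel would use to resample a filter at each group element.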

The group equivariant model is implemented in group_equivariant_cnn.py, while the spatial transformer model is implemented in spatial_transformer.py.
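The defining property of the group equivariant model's first layer, the lifting convolution, is that rotating the input permutes and rotates the output feature maps rather than scrambling them. While the repository's kernels use interpolation to handle arbitrary rotation angles, the property can be checked exactly with 90° rotations (the group p4). The functions below are a simplified sketch, not the repository's implementation:

```python
import numpy as np

def corr2d(f, psi):
    """Valid-mode 2D cross-correlation, written out explicitly."""
    n, k = f.shape[0], psi.shape[0]
    m = n - k + 1
    out = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            out[i, j] = np.sum(f[i:i + k, j:j + k] * psi)
    return out

def lifting_correlation(f, psi):
    """Correlate f with all four 90-degree rotations of the filter,
    producing one spatial map per rotation (a function on p4)."""
    return [corr2d(f, np.rot90(psi, r)) for r in range(4)]
```

Equivariance here means: lifting a rotated image gives the same maps as lifting the original, then rotating each map spatially and cyclically shifting the rotation channel.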

Visualisations

Visualisations are implemented in the visualizations folder.

Reproducibility

The results folder contains two folders, no-rotations and rotations. Each contains weights and training logs for every training configuration, grouped by model configuration and training configuration. Executing the notebooks in the root folder reproduces all tables, visualizations and plots present in the submitted report. The training configuration is implemented in MNISTModule, located in the modules folder, in MNISTModule.py.

References

The implementation in the models folder is based on the following resources:
