- name: Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
  index: 2
date: January 2023
bibliography: references.bib
---
# Summary
In sensory perception and neuroscience, new computational models are most often tested and compared in terms of their ability to fit existing data sets.
However, experimental data are inherently limited in size, quality, and type, and complex models often saturate their explainable variance.
Moreover, it is often difficult to use models to guide the development of future experiments.
Here, building on ideas for optimal experimental stimulus selection (e.g., QUEST; Watson and Pelli, 1983), we present "Plenoptic", a Python software library for generating visual stimuli optimized for testing or comparing models.
Plenoptic provides a unified framework containing four previously published synthesis methods: model metamers (Freeman and Simoncelli, 2011), Maximum Differentiation (MAD) competition (Wang and Simoncelli, 2008), eigen-distortions (Berardino et al., 2017), and representational geodesics (Hénaff and Simoncelli, 2015).
Each method offers visualization of model representations and generation of images that can be used to experimentally test alignment with the human visual system.
Plenoptic leverages modern machine-learning tools to enable application of these synthesis methods to any computational model that satisfies a small set of common requirements.
Most importantly, the model must be image-computable, implemented in PyTorch, and end-to-end differentiable.
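As a minimal sketch of what such a model can look like, the following toy module (not part of Plenoptic; the architecture is purely illustrative) takes an image tensor and returns a differentiable representation:

```python
import torch
from torch import nn

class ToyFrontEnd(nn.Module):
    """Illustrative image-computable model: a single convolution
    followed by a pointwise nonlinearity, end-to-end differentiable."""

    def __init__(self):
        super().__init__()
        # one 7x7 filter standing in for a visual front-end
        self.conv = nn.Conv2d(1, 1, kernel_size=7, padding=3, bias=False)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: 4D tensor of shape (batch, channel, height, width)
        return torch.relu(self.conv(image))
```

Because every operation is a differentiable PyTorch primitive, gradients of the representation with respect to the input pixels are available automatically, which is what the synthesis methods rely on.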
The package includes examples of several low- and mid-level visual models, as well as a set of perceptual quality metrics.
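A hypothetical usage sketch follows; the module paths and names here (`po.data.einstein`, `po.synth.Metamer`, `synthesize`) are assumptions based on the library's documentation and may differ from the released API:

```python
import plenoptic as po  # assumed package import name

# assumed helper that loads a grayscale test image as a 4D tensor
image = po.data.einstein()

model = ToyFrontEnd()  # the illustrative model sketched above
model.eval()

# Model metamer synthesis: find an image whose model representation
# matches that of the original, despite differing in its pixels.
metamer = po.synth.Metamer(image, model)  # assumed class location
metamer.synthesize(max_iter=500)          # assumed method signature
```

The other synthesis methods follow the same pattern; MAD competition, for instance, pits two metrics against each other rather than probing a single model.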
Plenoptic is open source, tested, documented, and extensible, allowing the broader research community to contribute new examples and methods.
In summary, Plenoptic leverages machine learning tools to tighten the scientific hypothesis-testing loop, facilitating investigation of human visual representations.
# Acknowledgements
All authors contributed equally to this work; names are listed alphabetically.
EPS and KB were funded by the Simons Foundation.