
Commit 253d39f

edit authors and text

1 parent 56074c1 commit 253d39f

File tree: 1 file changed (+22 −19 lines)

joss_paper/paper.md

Lines changed: 22 additions & 19 deletions
@@ -1,5 +1,5 @@
 ---
-title: 'Plenoptic: synthesis methods for analyzing model representations'
+title: 'Plenoptic.py: Synthesizing model-optimized visual stimuli'
 tags:
   - Python
   - PyTorch
@@ -10,7 +10,7 @@ authors:
   - name: Kathryn Bonnen
     orcid: 0000-0002-9210-8275
     affiliation: 1, 2
-  - name: William Broderick
+  - name: William F. Broderick
     orcid: 0000-0002-8999-9003
     affiliation: 1
   - name: Lyndon R. Duong
@@ -22,6 +22,12 @@ authors:
   - name: Nikhil Parthasarathy
     orcid: 0000-0003-2572-6492
     affiliation: 1
+  - name: Xinyuan Zhao
+    orcid: 0000-0003-2572-6492
+    affiliation: 1
+  - name: Thomas E. Yerxa
+    orcid: 0000-0003-2572-6492
+    affiliation: 1
   - name: Eero P. Simoncelli
     orcid: 000-0002-1206-527X
     affiliation: 1, 2
@@ -30,32 +36,30 @@ affiliations:
     index: 1
   - name: Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
     index: 2
-date: April 2021
+date: January 2023
 bibliography: references.bib
 ---

 # Summary

-
-``Plenoptic`` builds primarily off of ``PyTorch`` [@paszke_pytorch_2019], a Python machine learning library popular in the research community due to its rapid prototyping capability. With ``Plenoptic``, users can build and train models in ``PyTorch``, then use ``Plenoptic`` synthesis methods to assess their internal representations.
-Our library is easily extensible, and allows for great flexibility to those who wish to develop or test their own synthesis methods.
-Within the library, we also provide an extensive suite of ``PyTorch``-implemented models and activation functions canonical to computational neuroscience.
-
-Many of the methods in ``Plenoptic`` have been developed and used across several studies; however, analyses in these studies used disparate languages and frameworks, and some have yet to be made publicly available.
-Here, we have reimplemented the methods central to each of these studies, and unified them under a single, fully-documented API.
-Our library includes several Jupyter notebook tutorials designed to be accessible to researchers in the fields of machine learning, computational neuroscience, and perceptual science.
-``Plenoptic`` provides an exciting avenue for researchers to probe their models to gain a deeper understanding of their internal representations.
-
-# Statement of Need
-
-# Overview
+In sensory perception and neuroscience, new computational models are most often tested and compared in terms of their ability to fit existing data sets.
+However, experimental data are inherently limited in size, quality, and type, and complex models often saturate their explainable variance.
+Moreover, it is often difficult to use models to guide the development of future experiments.
+Here, building on ideas for optimal experimental stimulus selection (e.g., QUEST; Watson and Pelli, 1983), we present "Plenoptic", a Python software library for generating visual stimuli optimized for testing or comparing models.
+Plenoptic provides a unified framework containing four previously published synthesis methods -- model metamers (Freeman and Simoncelli, 2011), Maximum Differentiation (MAD) competition (Wang and Simoncelli, 2008), eigen-distortions (Berardino et al., 2017), and representational geodesics (Hénaff and Simoncelli, 2015) -- each of which offers visualization of model representations and generation of images that can be used to experimentally test alignment with the human visual system.
+Plenoptic leverages modern machine-learning methods to enable application of these synthesis methods to any computational model that satisfies a small set of common requirements.
+The most important of these is that the model must be image-computable, implemented in PyTorch, and end-to-end differentiable.
+The package includes examples of several low- and mid-level visual models, as well as a set of perceptual quality metrics.
+Plenoptic is open source, tested, documented, and extensible, allowing the broader research community to contribute new examples and methods.
+In summary, Plenoptic leverages machine-learning tools to tighten the scientific hypothesis-testing loop, facilitating investigation of human visual representations.

 # Acknowledgements

-KB, WB, LRD, PEF, and NP each contributed equally to this work; and names are listed alphabetically.
-EPS was funded by the Howard Hughes Medical Institute. EPS and KB were funded by Simons Institute.
+All authors contributed equally to this work, and names are listed alphabetically.
+EPS and KB were funded by the Simons Institute.

 For a quick reference, the following citation commands can be used:
+
 - `@author:2001` -> "Author et al. (2001)"
 - `[@author:2001]` -> "(Author et al., 2001)"
 - `[@author1:2001; @author2:2001]` -> "(Author1 et al., 2001; Author2 et al., 2002)"
@@ -69,4 +73,3 @@ For a quick reference, the following citation commands can be used:
 @wang_maximum_2008
 @paszke_pytorch_2019
 @portilla_parametric_2000
-
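The new summary states that any model can be used with Plenoptic's synthesis methods as long as it is image-computable, implemented in PyTorch, and end-to-end differentiable. To make that requirement concrete, here is a minimal sketch of the idea behind one of the four methods, model-metamer synthesis: gradient descent on an image until its model representation matches that of a target image. The toy "model" (a fixed average-pooling layer) and all names are illustrative assumptions, not plenoptic's actual API.

```python
# Illustrative sketch only: a toy model-metamer synthesis loop in plain
# PyTorch. The model and variable names are hypothetical; plenoptic's real
# API wraps this kind of optimization with convergence checks and metrics.
import torch

torch.manual_seed(0)

def model(img):
    # A toy image-computable, end-to-end differentiable "representation":
    # local averaging over 4x4 blocks (a crude stand-in for a visual model).
    return torch.nn.functional.avg_pool2d(img, kernel_size=4)

target = torch.rand(1, 1, 16, 16)            # reference image
target_rep = model(target).detach()          # its fixed model representation

# Start from a random image and optimize it (not the model's parameters).
metamer = torch.rand(1, 1, 16, 16, requires_grad=True)
opt = torch.optim.Adam([metamer], lr=0.05)

start_dist = torch.norm(model(metamer) - target_rep).item()
for _ in range(200):
    opt.zero_grad()
    loss = torch.norm(model(metamer) - target_rep) ** 2
    loss.backward()
    opt.step()
end_dist = torch.norm(model(metamer) - target_rep).item()
# The result matches the target in the model's representation while
# (generically) remaining a physically different image -- a model metamer.
```

Because the representation has far fewer dimensions than the image, many images share it; the optimization finds one, and comparing such images to the target experimentally probes whether human vision is similarly insensitive to the difference.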
