Commit 3006f59

clean up v1.0.3
1 parent 7ce761d commit 3006f59

15 files changed (+154 −407 lines)

README.md

Lines changed: 24 additions & 30 deletions
@@ -12,6 +12,7 @@ Some of the most commonly used neural networks in neuroscience research are incl
 
 ## Table of contents
 - [Acknowledgements](#acknowledgements)
+- [Change Logs](#change-logs)
 - [Install](#install)
 - [Install Using pip](#install-using-pip)
 - [Install From GitHub](#install-from-github)
@@ -23,11 +24,14 @@ Some of the most commonly used neural networks in neuroscience research are incl
 - [Random Input](#random-input)
 - [Criterion](#criterion)
 - [RNNLoss](#rnnloss)
-- [Change Logs](#change-logs)
 - [Others](#others)
 
 ## Acknowledgements
-Immense thanks to Christopher J. Cueva for his mentorship in developing this project. This project can't be done without his invaluable help.
+Immense thanks to [Christopher J. Cueva](https://www.metaconscious.org/author/chris-cueva/) for his mentorship in developing this project. This project could not have been done without his invaluable help.
+
+## Change Logs
+- [v1.0.3 [stable]](https://github.com/zhaozewang/NN4Neurosci/blob/main/docs/change_logs/v1.0.3.md)
+- [v1.1.0 [newest]](https://github.com/zhaozewang/NN4Neurosci/blob/main/docs/change_logs/v1.1.0.md)
 
 ## Install
 ### Install using pip
@@ -36,60 +40,50 @@ pip install nn4n
 ```
 
 ### Install from GitHub
+#### Clone the repository
 ```
 git clone https://github.com/zhaozewang/NN4Neurosci.git
 ```
 #### Navigate to the NN4Neurosci directory
 ```
 cd NN4Neurosci/
-python setup.py install
-```
-#### Install
-```
-cd NN4Neurosci/
-pip install .
+pip install -e .
 ```
 
 
 ## Model
 ### CTRNN
-The implementation of standard continuous-time RNN (CTRNN). This implementation supports enforcing sparsity constraint (i.e. preventing new synapses from being created) and E/I constraints (i.e. enforcing Dale's law). </br>
-
+The implementation of the standard continuous-time RNN (CTRNN). It supports enforcing sparsity constraints (i.e., preventing new synapses from being created) and E/I constraints (i.e., enforcing Dale's law), ensuring that gradient descent updates synapses under biologically plausible constraints. <br>
 - [Documentation](https://github.com/zhaozewang/NN4Neurosci/blob/main/docs/model/CTRNN/index.md)
 - [Examples](https://github.com/zhaozewang/NN4Neurosci/blob/main/examples/CTRNN.ipynb)
+<p align="center"><img src="https://github.com/zhaozewang/NN4Neurosci/blob/main/docs/images/basics/EIRNN_structure.png" width="400"></p>
+
 
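To make the constraint options concrete, here is a minimal usage sketch. It assumes the class is exposed as `nn4n.model.CTRNN` and accepts the keywords from the hyperparameter dictionary in `examples/CTRNN.ipynb` (shown later in this commit); neither the import path nor the constructor signature is verified against the released package.

```python
# Hypothetical sketch -- import path and constructor keywords are inferred
# from this commit's docs and notebook, not verified against the nn4n API.
from nn4n.model import CTRNN  # assumed import path

rnn = CTRNN(
    tau=50,                       # membrane time constant
    dt=1,                         # integration time step
    activation="relu",            # pointwise nonlinearity
    positivity_constraints=True,  # enforce Dale's law (renamed from `use_dale` in v1.0.3)
    sparsity_constraints=False,   # allow new synapses to form during training
    self_connections=False,       # zero out the recurrent diagonal
)
```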
 ## Structure
-The detailed structure (e.g. whether its modular or hierarchical etc.) of any standard 3-layer RNN (as shown in figure above) can be specified using masks in our `model` module implementation. Easy implementations of a few RNN structures is included in the `structure` module.
+In the CTRNN implementation, the hidden layer structure can be easily controlled by specifying sparsity masks and E/I masks. We put all RNN update logic in the `model` module and all structure-related logic in the `structure` module to streamline the implementation process. <br>
+We also place extra emphasis on structure, as it is often more informative about the underlying biological mechanisms. For instance, we might require different module sizes, or a multi-module network with E/I constraints; implementing these by hand can be verbose and error-prone. The following classes let you achieve these goals by specifying only a few parameters. <br>
 - [Documentation](https://github.com/zhaozewang/NN4Neurosci/blob/main/docs/structure/index.md)
 
-### Multi-Area
-The HiddenLayer of a RNN is often defined using a connectivity matrix, depicting a somewhat 'random' connectivity between neurons. The connectivity matrix is often designed to imitate the connectivity of a certain brain area or a few brain areas. When modeling a single brain area, the connectivity matrix is often a fully connected matrix. </br>
-However, to model multiple brain areas, it would be more reasonable to use a connectivity matrix with multiple areas. In each areas is densely connected within itself and sparsely connected between areas. The `MultiArea` class in the `structure` module is designed to implement such a connectivity matrix. </br>
-
+#### Multi-Area
+The following implementation groups neurons into areas according to your specification. Areas can have different sizes and different connectivities to other areas. <br>
 - [Examples](https://github.com/zhaozewang/NN4Neurosci/blob/main/examples/MultiArea.ipynb)
+<p align="center"><img src="https://github.com/zhaozewang/NN4Neurosci/blob/main/docs/images/basics/Multi_Area.png" width="400"></p>
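A hedged sketch of what specifying a two-area hidden layer might look like; `n_area` is the parameter name used in `docs/structure/index.md`, while the import path and the `hidden_size` keyword are illustrative assumptions.

```python
# Hypothetical sketch -- only `n_area` is documented (docs/structure/index.md);
# the import path and `hidden_size` keyword are illustrative assumptions.
from nn4n.structure import MultiArea  # assumed import path

area_structure = MultiArea(
    n_area=2,         # two areas: dense within each area, sparse between areas
    hidden_size=100,  # assumed keyword for the hidden-layer size
)
```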
 
-### Multi-Area with E/I constraints
-On top of modeling brain with multi-area hidden layer, another critical constraint would be the Dale's law, as proposed in the paper [Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework](https://doi.org/10.1371/journal.pcbi.1004792) by Song et al. 2016. The `MultiAreaEI` class in the `structure` module is designed to implement such a connectivity matrix. </br>
-This class allows for a much easier implementation of the E/I constraints particularly when there are multiple areas in the hidden layer. It provides flexible control over the excitatory-excitatory, excitatory-inhibitory, inhibitory-excitatory, and inhibitory-inhibitory connections on top of the basic `MultiArea` features. </br>
-
+#### Multi-Area with E/I constraints
+On top of modeling the brain with a multi-area hidden layer, another critical constraint is Dale's law, as proposed in the paper [Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework](https://doi.org/10.1371/journal.pcbi.1004792) by Song et al. 2016. The `MultiAreaEI` class in the `structure` module implements such a connectivity matrix. <br>
+Additionally, specifying how different E/I regions connect to each other can be tricky; we parameterized this process so that it can be controlled with only two lines of code. <br>
 - [Examples](https://github.com/zhaozewang/NN4Neurosci/blob/main/examples/MultiArea.ipynb)
+<p align="center"><img src="https://github.com/zhaozewang/NN4Neurosci/blob/main/docs/images/basics/Multi_Area_EI.png" width="400"></p>
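A sketch of the corresponding E/I-constrained structure. Apart from `n_area`, every keyword here is an assumption chosen to illustrate the kind of inter-area E/I control described above, not verified API.

```python
# Hypothetical sketch -- apart from `n_area`, these keywords are assumptions
# meant to illustrate per-population inter-area control, not verified API.
from nn4n.structure import MultiAreaEI  # assumed import path

ei_structure = MultiAreaEI(
    n_area=2,         # as in MultiArea above
    hidden_size=100,  # assumed keyword
    exc_pct=0.8,      # assumed: fraction of excitatory neurons per area
)
```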
 
-### Random Input
-Neurons' dynamic receiving input will be heavily driven by the inputting signal. Injecting signal to only part of the neuron will result in more versatile and hierarchical dynamics. See [A Versatile Hub Model For Efficient Information Propagation And Feature Selection](https://arxiv.org/abs/2307.02398) <br>
-
+#### Random Input
+A neuron's dynamics are heavily driven by the input signal it receives. Injecting the signal into only a subset of the neurons results in more versatile and hierarchical dynamics. See [A Versatile Hub Model For Efficient Information Propagation And Feature Selection](https://arxiv.org/abs/2307.02398). This is supported by the `RandomInput` class. <br>
 - Example to be added
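Since the example notebook is still pending, here is a plain-NumPy sketch of the underlying idea, independent of the `RandomInput` API (whose signature this commit does not show): drive only a random subset of hidden neurons with the input.

```python
import numpy as np

# Concept sketch in plain NumPy, independent of the `RandomInput` API:
# inject input into only a random subset of hidden neurons by zeroing
# the columns of the input mask for all other neurons.
n_in, n_hidden = 10, 100
rng = np.random.default_rng(seed=0)

input_mask = np.ones((n_in, n_hidden))
receivers = rng.choice(n_hidden, size=20, replace=False)  # 20% of neurons receive input
silent = np.setdiff1d(np.arange(n_hidden), receivers)
input_mask[:, silent] = 0.0  # remaining neurons get no direct input drive
```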
 
 ## Criterion
-### RNNLoss
-The loss function is modularized. The `RNNLoss` class is designed in modular fashion and included the most commonly used loss functions in neuroscience research. </br>
-
+#### RNNLoss
+The loss function is modularized. The `RNNLoss` class is designed in a modular fashion and includes the most commonly used loss functions, such as the Frobenius norm on connectivity, metabolic cost, and reconstruction loss. <br>
 - [Documentation](https://github.com/zhaozewang/NN4Neurosci/blob/main/docs/criterion/index.md)
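A minimal sketch of composing the modular loss; the import path and the weight-style keyword names are illustrative assumptions only.

```python
# Hypothetical sketch -- import path and keyword names are assumptions,
# not verified against the nn4n API.
from nn4n.criterion import RNNLoss  # assumed import path

criterion = RNNLoss(
    model=rnn,       # the CTRNN instance from the sketch above
    lambda_mse=1.0,  # assumed: weight of the task/reconstruction term
    lambda_fr=0.01,  # assumed: weight of the metabolic (firing-rate) cost
)
```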
 
-
-## Change Logs
-- [0.1.3](https://github.com/zhaozewang/NN4Neurosci/blob/main/docs/change_logs/v1.0.3.md)
-
-
 ## Others
 For similar projects:
 - [nn-brain](https://github.com/gyyang/nn-brain)

docs/change_logs/v1.0.3.md

Lines changed: 11 additions & 10 deletions
@@ -1,12 +1,13 @@
-## Change Logs
-- Removed `ei_balance`. It will automatically balanced such that all weights add up to 1.
-- Added Multi Layer Perceptron (MLP) model.
-- Removed `allow_negative`
-- Renamed `use_dale` as `positivity_constraint`, support list definition.
-- Renamed `new_synapses` as `sparsity_constraint`.
-- Remove auto E/I balance. Potentially move it to a separate function.
+## Version 1.0.3 Change Logs
+1. Removed `ei_balance`. Weights are now balanced automatically such that they add up to 1.
+2. Added a Multi-Layer Perceptron (MLP) model.
+3. Removed `allow_negative`.
+4. Renamed `use_dale` to `positivity_constraints`; it now supports list definitions.
+5. Renamed `new_synapses` to `sparsity_constraints`.
+6. Removed auto E/I balance; it may be moved to a separate function.
+7. Removed `spectral_radius`; it will be redesigned in future versions.
 
 
-## TODO
-- [x] Change `use_dale` to `positivity_constraint`
-- [x] Change `new_synapses` to `sparsity_constraint`
+## TODOs
+- [x] Change `use_dale` to `positivity_constraints`
+- [x] Change `new_synapses` to `sparsity_constraints`

docs/change_logs/v1.1.0.md

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
+## Version 1.1.0 Change Logs
+
+## TODOs
+- [ ] Move initialization (i.e., the distributions of weights, biases, etc.) into the structures.
+- [ ] Add custom controls for the update speeds of different parts of the network.

docs/model/CTRNN/index.md

Lines changed: 11 additions & 11 deletions
@@ -70,7 +70,7 @@ fr^{t+1} = f((1-\alpha) v^t + \alpha( W_{hid}^T f(v^t) + W_{in}^T u^t + b_{hid}
 ## Excitatory-Inhibitory constrained continuous-time RNN
 The implementation of CTRNN also supports Excitatory-Inhibitory constrained continuous-time RNN (EIRNN). EIRNN is proposed by H. Francis Song, Guangyu R. Yang, and Xiao-Jing Wang in [Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework](https://doi.org/10.1371/journal.pcbi.1004792)
 
-The original [code](https://github.com/frsong/pycog) is implemented in [Theano](https://pypi.org/project/Theano/) and may be deprecated due to the unsupported Python version. Theano is no longer maintained after Jul 2020. In this repo, the PyTorch version of EIRNN is implemented. It is implicitly included in the CTRNN class and can be enabled by setting `positivity_constraint` to `True` and use appropriate masks.
+The original [code](https://github.com/frsong/pycog) is implemented in [Theano](https://pypi.org/project/Theano/), which has not been maintained since July 2020 and may no longer run on current Python versions. This repo implements a PyTorch version of EIRNN. It is implicitly included in the CTRNN class and can be enabled by setting `positivity_constraints` to `True` and using appropriate masks.
 
 A visual illustration of the EIRNN is shown below.
 
@@ -119,9 +119,9 @@ These parameters primarily determine the training process of the network. The `t
 These parameters primarily determine the constraints of the network. By default, the network is initialized using the most lenient constraints, i.e., no constraints being enforced.
 | Parameter | Default | Type | Description |
 |:-------------------------|:-------------:|:--------------------------:|:-------------------------------------------|
-| sign | False | `boolean`/`list` | Whether to enforce Dale's law. Either a `boolean` or a `list` of three `boolean`s. If the given value is a list, from the first element to the last element, corresponds to the InputLayer, HiddenLayer, and ReadoutLayer, respectively. |
-| sparsity_constraint | True | `boolean`/`list` | Whether a neuron can grow new connections. See [constraints and masks](#constraints-and-masks). If it's a list, it must have precisely three elements. Note: this must be checked even if your mask is sparse, otherwise the new connection will still be generated |
-| layer_masks | `None` or `list` | `list` of `np.ndarray` | Layer masks if `sparsity_constraint/positivity_constraint is set to true. From the first to the last, the list elements correspond to the mask for Input-Hidden, Hidden-Hidden, and Hidden-Readout weights, respectively. Each mask must have the same dimension as the corresponding weight matrix. See [constraints and masks](#constraints-and-masks) for details. |
+| positivity_constraints | False | `boolean`/`list` | Whether to enforce Dale's law. Either a `boolean` or a `list` of three `boolean`s. If the given value is a list, its elements correspond to the InputLayer, HiddenLayer, and ReadoutLayer, respectively. |
+| sparsity_constraints | True | `boolean`/`list` | Whether a neuron can grow new connections. See [constraints and masks](#constraints-and-masks). If it's a list, it must have precisely three elements. Note: this must be set even if your mask is sparse; otherwise new connections will still be generated. |
+| layer_masks | `None` or `list` | `list` of `np.ndarray` | Layer masks, required if `sparsity_constraints`/`positivity_constraints` is set to `True`. From first to last, the list elements correspond to the masks for Input-Hidden, Hidden-Hidden, and Hidden-Readout weights, respectively. Each mask must have the same dimensions as the corresponding weight matrix. See [constraints and masks](#constraints-and-masks) for details. |
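To make the table concrete, a sketch of the three constraint-related entries. The mask orientation (input-dim by hidden-dim, and so on) is an assumption; the table only requires each mask to match its weight matrix's dimensions.

```python
import numpy as np

# Sketch of the constraint parameters from the table above. Mask orientation
# is an assumption; each mask only needs to match its weight matrix's shape.
n_in, n_hidden, n_out = 2, 100, 2

constraint_params = {
    "positivity_constraints": [True, True, False],  # per-layer Dale's law
    "sparsity_constraints": [False, True, False],   # per-layer synapse growth
    "layer_masks": [
        np.ones((n_in, n_hidden)),      # Input-Hidden mask
        np.ones((n_hidden, n_hidden)),  # Hidden-Hidden mask
        np.ones((n_hidden, n_out)),     # Hidden-Readout mask
    ],
}
```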
 
 
 ## Parameter Specifications
@@ -147,15 +147,15 @@ then add noise for $t+2$
 ### Constraints and masks
 Constraints are enforced before each forward pass
 #### Dale's law:
-Masks (input, hidden, and output) cannot be `None` if `positivity_constraint` is `True`.<br>
+Masks (input, hidden, and output) cannot be `None` if `positivity_constraints` is `True`.<br>
 Only entry signs matter for the enforcement of Dale's law. All edges from the same neuron must be all excitatory or all inhibitory. This is enforced across training using the `relu()` and `-relu()` functions.<br>
-When `positivity_constraint` is set to true, it will automatically balance the excitatory/inhibitory such that all synaptic strengths add up to zero.
+When `positivity_constraints` is set to `True`, the excitatory/inhibitory weights are automatically balanced such that all synaptic strengths add up to zero.
 #### New synapse:
-`sparsity_constraint` defines whether a neuron can 'grow' new connections.<br>
-If plasticity is set to False, neurons cannot 'grow' new connections. A mask must be provided if `sparsity_constraint` is set to False.<br>
+`sparsity_constraints` defines whether a neuron can 'grow' new connections.<br>
+If it is set to `False`, neurons cannot 'grow' new connections, and a mask must be provided.<br>
 Only zeros entries matter. All entries that correspond to a zero value in the mask will remain zero across all time.
 #### Self connections:
-Whether a neuron can connect to itself. This feature is enforced along with the `sparsity_constraint` mask. If mask is not specified but `self_connections` is set, a mask that only has zero entires on the diagonal will be generated automatically.
+Whether a neuron can connect to itself. This feature is enforced along with the `sparsity_constraints` mask. If a mask is not specified but `self_connections` is set, a mask with zero entries only on the diagonal will be generated automatically.
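The three rules above can all be encoded in a single hidden mask. A NumPy sketch, assuming rows index presynaptic (source) neurons so that Dale's law corresponds to one sign per row:

```python
import numpy as np

# Sketch of the mask semantics above. Assumption: rows index presynaptic
# neurons, so Dale's law means each row carries a single sign.
n = 6
hidden_mask = np.ones((n, n))
hidden_mask[4:, :] = -1.0           # last two neurons inhibitory (only signs matter)
hidden_mask[0, 3] = 0.0             # this synapse must remain absent across training
np.fill_diagonal(hidden_mask, 0.0)  # self_connections=False: zero diagonal
```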
 
 ## Methods
 | Method | Description |
@@ -171,8 +171,8 @@ Whether a neuron can connect to itself. This feature is enforced along with the
 - [x] Test different activation functions
 - [x] Bias when using Dale's law?
 - [ ] If the masks are not set, there need default values.
-- [x] Potentially user can choose to enforce `sparsity_constraint` or not for a specific layer
-- [x] Re-write Dale's law such that it can still work when `sparsity_constraint` is not enforced.
+- [x] Let the user choose whether to enforce `sparsity_constraints` for a specific layer
+- [x] Re-write Dale's law such that it still works when `sparsity_constraints` is not enforced.
 - [x] Can InputLayer and ReadoutLayer weights be negative when Dale's law is enforced?
 - [x] Check if bias does not change when use_bias = False
 - [x] Merge hidden_bias, input_bias, readout_bias to a single parameter

docs/structure/index.md

Lines changed: 1 addition & 1 deletion
@@ -48,7 +48,7 @@ Methods that are shared by all structures. <br>
 
 ### MultiArea
 See [Examples](https://github.com/zhaozewang/NN4Neurosci/blob/main/examples/MultiArea.ipynb) <br>
-This will generate a multi-area RNN without E/I constraints. Therefore, by default, the input/hidden/readout masks are binary masks. Use cautious when the `positivity_constraint` parameter of CTRNN is set to `True`, because it will make all neurons to be excitatory.
+This will generate a multi-area RNN without E/I constraints. Therefore, by default, the input/hidden/readout masks are binary masks. Use caution when the `positivity_constraints` parameter of CTRNN is set to `True`, because it will make all neurons excitatory.
 **NOTE:** This also implicitly covers single area case. If `n_area` is set to 1. All other parameters that conflict this setting will be ignored.
 #### MultiArea Parameters
 | Parameter | Default | Required | Type | Description |

examples/CTRNN.ipynb

Lines changed: 2 additions & 2 deletions
@@ -497,13 +497,13 @@
 " \n",
 " # hyperparameters\n",
 " \"tau\": 50,\n",
-" \"positivity_constraint\": True,\n",
+" \"positivity_constraints\": True,\n",
 " \"scaling\": 1.0,\n",
 " \"dt\": 1,\n",
 " \"activation\": \"relu\",\n",
 " \"preact_noise\": 0.0,\n",
 " \"postact_noise\": 0.0,\n",
-" \"sparsity_constraint\": False,\n",
+" \"sparsity_constraints\": False,\n",
 " \"self_connections\": False,\n",
 "\n",
 " # bias and distribution\n",
