**README.md** (24 additions, 30 deletions)
## Table of contents
- [Acknowledgements](#acknowledgements)
- [Change Logs](#change-logs)
- [Install](#install)
- [Install Using pip](#install-using-pip)
- [Install From GitHub](#install-from-github)
- [Random Input](#random-input)
- [Criterion](#criterion)
- [RNNLoss](#rnnloss)
- [Others](#others)

## Acknowledgements
Immense thanks to [Christopher J. Cueva](https://www.metaconscious.org/author/chris-cueva/) for his mentorship in developing this project. This project could not have been done without his invaluable help.
This module implements the standard continuous-time RNN (CTRNN). The implementation supports enforcing sparsity constraints (i.e., preventing new synapses from being created) and E/I constraints (i.e., enforcing Dale's law), which ensures that gradient descent updates synapses under biologically plausible constraints. <br>
The detailed structure (e.g., whether it is modular or hierarchical) of any standard 3-layer RNN (as shown in the figure above) can be specified using masks in our `model` module implementation. Easy implementations of a few RNN structures are included in the `structure` module.
In the CTRNN implementation, the hidden-layer structure can be controlled by specifying sparsity masks and E/I masks. We put all RNN update logic in the `model` module and all structure-related logic in the `structure` module to streamline the implementation. <br>
We also place particular emphasis on structure, as it is often more informative about the underlying biological mechanisms. For instance, we might require different module sizes, or a multi-module network with E/I constraints; implementing these by hand can be verbose and error-prone. The following implementation lets you achieve these goals by specifying just a few parameters, as sketched below. <br>
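As a concrete illustration, here is a minimal sketch of wiring a mask into the model. It assumes the package is importable as `nn4n` and that the constructor accepts the parameters listed in the table further below; the remaining argument names (`input_dim`, `output_dim`, `hidden_size`) and the forward signature are assumptions, not the confirmed API.

```python
import numpy as np
import torch
from nn4n.model import CTRNN  # assumed import path

n_in, n_hid, n_out = 2, 100, 1

# Binary hidden-layer mask: 1 = connection allowed, 0 = connection forbidden.
# Here: a random mask with roughly 20% density.
rng = np.random.default_rng(0)
hidden_mask = (rng.random((n_hid, n_hid)) < 0.2).astype(float)
input_mask = np.ones((n_hid, n_in))     # mask shapes must match the corresponding
readout_mask = np.ones((n_out, n_hid))  # weight matrices; orientation assumed here

rnn = CTRNN(
    input_dim=n_in, output_dim=n_out, hidden_size=n_hid,   # assumed argument names
    sparsity_constraints=True,                              # keep zero entries at zero during training
    layer_masks=[input_mask, hidden_mask, readout_mask],
)

x = torch.rand(50, 1, n_in)   # (time, batch, input); the expected layout is an assumption
output, states = rnn(x)       # assumed forward signature
```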
The HiddenLayer of an RNN is often defined using a connectivity matrix that depicts a somewhat 'random' connectivity between neurons. The connectivity matrix is usually designed to imitate the connectivity of one brain area or a few brain areas. When modeling a single brain area, the connectivity matrix is often fully connected. <br>
However, to model multiple brain areas, it is more reasonable to use a connectivity matrix with multiple areas, where each area is densely connected within itself and sparsely connected to other areas. The `MultiArea` class in the `structure` module is designed to implement such a connectivity matrix. <br>

#### Multi-Area
The following implementation groups neurons into areas according to your specification. Areas can have different sizes and different connectivities to other areas. <br>
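To make this concrete, here is a plain-NumPy sketch of such a mask: dense within each area, sparse between areas. The `MultiArea` class builds masks like this from a handful of parameters (e.g., `n_area`) so you do not have to write it by hand; the sketch below only illustrates the structure it produces.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [60, 40]        # two areas with 60 and 40 neurons
p_between = 0.1         # sparse connectivity between areas
n = sum(sizes)

# Start from sparse random connections everywhere, then densify within each area.
mask = (rng.random((n, n)) < p_between).astype(float)
start = 0
for s in sizes:
    mask[start:start + s, start:start + s] = 1.0   # fully connected within the area
    start += s

# `mask` can now be used as the hidden-layer entry of `layer_masks`,
# together with `sparsity_constraints=True`, to fix this area structure during training.
```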
#### Multi-Area with E/I constraints
On top of modeling the brain with a multi-area hidden layer, another critical constraint is Dale's law, as proposed in the paper [Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework](https://doi.org/10.1371/journal.pcbi.1004792) by Song et al., 2016. The `MultiAreaEI` class in the `structure` module is designed to implement such a connectivity matrix. <br>
It allows for a much easier implementation of the E/I constraints, particularly when there are multiple areas in the hidden layer, and provides flexible control over the excitatory-excitatory, excitatory-inhibitory, inhibitory-excitatory, and inhibitory-inhibitory connections on top of the basic `MultiArea` features. <br>
Additionally, how different E/I regions connect to each other can be tricky to specify; we parameterized this process so that it can be controlled with only two lines of code. <br>
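As a rough, self-contained sketch of the idea (not the actual `MultiAreaEI` algorithm or API), the NumPy code below takes a multi-area binary mask like the one above, assigns an E/I sign to every neuron, and then applies one extra rule (removing inhibitory projections between areas) to show the kind of inter-area E/I control the class parameterizes.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [60, 40]
n = sum(sizes)

# Multi-area binary mask: dense within areas, ~10% between areas (as in the previous sketch).
mask = (rng.random((n, n)) < 0.1).astype(float)
start = 0
for s in sizes:
    mask[start:start + s, start:start + s] = 1.0
    start += s

# Dale's law: give each neuron a fixed sign, here 80% excitatory (+1) and 20% inhibitory (-1) per area.
sign = np.ones(n)
start = 0
for s in sizes:
    n_inh = int(0.2 * s)
    sign[start + s - n_inh:start + s] = -1.0
    start += s

# Signed mask: every presynaptic neuron's outgoing connections share its sign
# (columns are treated as presynaptic here; the library's orientation may differ).
ei_mask = mask * sign[np.newaxis, :]

# Example inter-area E/I rule: keep inhibition local by deleting inhibitory connections between areas.
inter_area = mask.copy()
inter_area[:sizes[0], :sizes[0]] = 0.0
inter_area[sizes[0]:, sizes[0]:] = 0.0
ei_mask[(inter_area > 0) & (sign[np.newaxis, :] < 0)] = 0.0
```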
#### Random Input
A neuron's dynamics are heavily driven by the input signal it receives. Injecting signals into only a subset of the neurons results in more versatile and hierarchical dynamics. See [A Versatile Hub Model For Efficient Information Propagation And Feature Selection](https://arxiv.org/abs/2307.02398). This is supported by the `RandomInput` class. <br>
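Since the example for this class is still to be added to the README, here is a minimal NumPy sketch of the underlying idea: an input mask that routes the input to only a random subset of hidden neurons. The actual `RandomInput` parameters and mask orientation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hid, n_in = 100, 2
input_fraction = 0.2                          # fraction of hidden neurons that receive input

receivers = rng.random(n_hid) < input_fraction
input_mask = np.zeros((n_hid, n_in))
input_mask[receivers, :] = 1.0                # only these neurons are driven by the input

# Use as the first element of `layer_masks` (Input-Hidden) with `sparsity_constraints=True`
# so that the non-receiving neurons stay input-free throughout training.
```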
## Criterion
#### RNNLoss
The loss function is modularized. The `RNNLoss` class is designed in a modular fashion and includes the most commonly used loss functions in neuroscience research, such as the Frobenius norm on connectivity, metabolic cost, and reconstruction loss. <br>
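As a rough illustration of how such a composite loss is assembled, the plain-PyTorch sketch below combines the terms mentioned above (task/reconstruction loss, a Frobenius-norm penalty on the recurrent connectivity, and a metabolic firing-rate cost). The real `RNNLoss` class wraps terms like these behind its own parameters, which may be named and weighted differently.

```python
import torch

def composite_loss(output, target, hidden_states, w_hh,
                   lambda_conn=1e-4, lambda_fr=1e-4):
    """Illustrative composite loss: task loss + connectivity penalty + metabolic cost."""
    task = torch.mean((output - target) ** 2)                  # reconstruction / task loss
    conn = lambda_conn * torch.norm(w_hh, p="fro") ** 2        # Frobenius norm on connectivity
    metabolic = lambda_fr * torch.mean(hidden_states ** 2)     # penalize high firing rates
    return task + conn + metabolic
```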
The CTRNN implementation also supports the Excitatory-Inhibitory constrained continuous-time RNN (EIRNN), as proposed by H. Francis Song, Guangyu R. Yang, and Xiao-Jing Wang in [Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework](https://doi.org/10.1371/journal.pcbi.1004792).
The original [code](https://github.com/frsong/pycog) is implemented in [Theano](https://pypi.org/project/Theano/), which has not been maintained since July 2020 and may no longer run on current Python versions. In this repo, a PyTorch version of EIRNN is implemented. It is implicitly included in the CTRNN class and can be enabled by setting `positivity_constraints` to `True` and using appropriate masks.
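A minimal sketch of turning this on. `positivity_constraints` and `layer_masks` are the parameter names used in this README; the other constructor arguments and the sign convention of the mask (columns as presynaptic neurons) are assumptions to check against the docs.

```python
import numpy as np
from nn4n.model import CTRNN   # assumed import path

n_hid = 100
sign = np.concatenate([np.ones(80), -np.ones(20)])   # 80 excitatory, 20 inhibitory neurons
hidden_mask = np.tile(sign, (n_hid, 1))              # every outgoing column carries its neuron's sign

rnn = CTRNN(
    input_dim=2, output_dim=1, hidden_size=n_hid,    # assumed argument names
    positivity_constraints=[False, True, False],      # enforce Dale's law on the hidden layer only
    layer_masks=[np.ones((n_hid, 2)), hidden_mask, np.ones((1, n_hid))],
)
```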
A visual illustration of the EIRNN is shown below.
These parameters primarily determine the constraints of the network. By default, the network is initialized with the most lenient constraints, i.e., no constraints are enforced.

| Parameter | Default | Type | Description |
|-----------|---------|------|-------------|
| positivity_constraints | False | `boolean`/`list` | Whether to enforce Dale's law. Either a `boolean` or a `list` of three `boolean`s. If the given value is a list, its elements correspond to the InputLayer, HiddenLayer, and ReadoutLayer, respectively. |
| sparsity_constraints | True | `boolean`/`list` | Whether a neuron can grow new connections. See [constraints and masks](#constraints-and-masks). If it's a list, it must have exactly three elements. Note: this must be set even if your mask is sparse; otherwise new connections will still be generated. |
| layer_masks | `None` or `list` | `list` of `np.ndarray` | Layer masks if `sparsity_constraints`/`positivity_constraints` is set to `True`. From first to last, the list elements correspond to the masks for the Input-Hidden, Hidden-Hidden, and Hidden-Readout weights, respectively. Each mask must have the same dimension as the corresponding weight matrix. See [constraints and masks](#constraints-and-masks) for details. |
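To restate the table in code: the list forms follow the order InputLayer, HiddenLayer, ReadoutLayer, and each mask must match the shape of the weight matrix it constrains. The orientation shown below is an assumption; check the library's convention.

```python
import numpy as np

n_in, n_hid, n_out = 2, 100, 1

layer_masks = [
    np.ones((n_hid, n_in)),    # Input -> Hidden mask
    np.ones((n_hid, n_hid)),   # Hidden -> Hidden mask
    np.ones((n_out, n_hid)),   # Hidden -> Readout mask
]

# Per-layer switches, in the same order: [InputLayer, HiddenLayer, ReadoutLayer].
positivity_constraints = [False, True, False]
sparsity_constraints = [False, True, False]
```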
## Parameter Specifications
### Constraints and masks
Constraints are enforced before each forward pass.
#### Dale's law:
Masks (input, hidden, and output) cannot be `None` if `positivity_constraints` is `True`.<br>
Only the signs of the entries matter for the enforcement of Dale's law. All edges from the same neuron must be either all excitatory or all inhibitory. This is enforced across training using the `relu()` and `-relu()` functions.<br>
When `positivity_constraints` is set to `True`, the excitatory/inhibitory weights are automatically balanced such that all synaptic strengths add up to zero.
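A conceptual PyTorch sketch of the `relu()`/`-relu()` trick described above; the library's actual enforcement code and weight orientation may differ.

```python
import torch

def enforce_dale(w, sign):
    """Clamp each presynaptic column of w to the sign in `sign` (+1 excitatory, -1 inhibitory)."""
    sign = torch.as_tensor(sign, dtype=w.dtype)
    excitatory = torch.relu(w) * (sign > 0)      # E columns keep only their positive entries
    inhibitory = -torch.relu(-w) * (sign < 0)    # I columns keep only their negative entries
    return excitatory + inhibitory

w = torch.randn(5, 5)
sign = torch.tensor([1.0, 1.0, 1.0, -1.0, -1.0])   # three excitatory, two inhibitory neurons
w_dale = enforce_dale(w, sign)
```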
#### New synapse:
`sparsity_constraints` defines whether a neuron can 'grow' new connections.<br>
If it is set to `False`, neurons cannot 'grow' new connections. A mask must be provided if `sparsity_constraints` is set to `False`.<br>
Only zero entries matter: all entries that correspond to a zero value in the mask will remain zero across all time.
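Conceptually, this amounts to multiplying the weights elementwise by the binary mask before each forward pass, so entries that start at zero can never become non-zero. A plain-PyTorch sketch (not the library's actual code):

```python
import torch

w_hh = torch.randn(4, 4)
mask = torch.tensor([[1., 1., 0., 0.],
                     [1., 1., 0., 0.],
                     [0., 0., 1., 1.],
                     [0., 0., 1., 1.]])

w_effective = w_hh * mask   # entries where mask == 0 stay exactly zero on every forward pass
```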
#### Self connections:
Whether a neuron can connect to itself. This feature is enforced along with the `sparsity_constraints` mask. If no mask is specified but `self_connections` is set, a mask with zero entries only on the diagonal will be generated automatically.
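For reference, the automatically generated mask described above is simply an all-ones matrix with zeros on the diagonal, for example:

```python
import numpy as np

n_hid = 100
no_self_mask = np.ones((n_hid, n_hid)) - np.eye(n_hid)   # zero diagonal: no self connections
```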
## Methods
| Method | Description |
- [x] Test different activation functions
- [x] Bias when using Dale's law?
- [ ] If the masks are not set, default values are needed.
- [x] Potentially the user can choose to enforce `sparsity_constraints` or not for a specific layer
- [x] Re-write Dale's law such that it still works when `sparsity_constraints` is not enforced.
- [x] Can InputLayer and ReadoutLayer weights be negative when Dale's law is enforced?
- [x] Check that bias does not change when `use_bias = False`
- [x] Merge `hidden_bias`, `input_bias`, `readout_bias` into a single parameter
**docs/structure/index.md** (1 addition, 1 deletion)
### MultiArea
See [Examples](https://github.com/zhaozewang/NN4Neurosci/blob/main/examples/MultiArea.ipynb) <br>
This will generate a multi-area RNN without E/I constraints. Therefore, by default, the input/hidden/readout masks are binary masks. Use caution when the `positivity_constraints` parameter of CTRNN is set to `True`, because it will make all neurons excitatory.
**NOTE:** This also implicitly covers the single-area case: if `n_area` is set to 1, all other parameters that conflict with this setting will be ignored.