snntorch/_neurons/leakyparallel.py (36 additions, 19 deletions)
@@ -24,13 +24,24 @@ class LeakyParallel(nn.Module):
Several differences between `LeakyParallel` and `Leaky` include:

- * Negative hidden states are clipped due to the forced ReLU operation in RNN
- * Linear weights are included in addition to recurrent weights
- * `beta` is clipped between [0,1] and cloned to `weight_hh_l` only upon layer initialization. It is unused otherwise
- * There is no explicit reset mechanism
- * Several functions such as `init_hidden`, `output`, `inhibition`, and `state_quant` are unavailable in `LeakyParallel`
- * Only the output spike is returned. Membrane potential is not accessible by default
- * RNN uses a hidden matrix of size (num_hidden, num_hidden) to transform the hidden state vector. This would 'leak' the membrane potential between LIF neurons, and so the hidden matrix is forced to a diagonal matrix by default. This can be disabled by setting `weight_hh_enable=True`.
+ * Negative hidden states are clipped due to the
+   forced ReLU operation in RNN.
+ * Linear weights are included in addition to
+   recurrent weights.
+ * `beta` is clipped between [0,1] and cloned to
+   `weight_hh_l` only upon layer initialization.
+   It is unused otherwise.
+ * There is no explicit reset mechanism.
+ * Several functions such as `init_hidden`, `output`,
+   `inhibition`, and `state_quant` are unavailable
+   in `LeakyParallel`.
+ * Only the output spike is returned. Membrane potential
+   is not accessible by default.
+ * RNN uses a hidden matrix of size (num_hidden, num_hidden)
+   to transform the hidden state vector. This would 'leak'
+   the membrane potential between LIF neurons, and so the
+   hidden matrix is forced to a diagonal matrix by default.
+   This can be disabled by setting `weight_hh_enable=True`.
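A rough usage sketch of the behavior listed in the bullets above (not part of this diff; it assumes an installed ``snntorch`` and the `input_size`, `hidden_size`, and `beta` constructor arguments named in this docstring)::

    import torch
    import snntorch as snn

    # hypothetical sizes for illustration
    lif = snn.LeakyParallel(input_size=784, hidden_size=128, beta=0.9)

    # one tensor carries the whole sequence: (L=100 steps, N=32, H_in=784)
    x = torch.rand(100, 32, 784)

    spk = lif(x)      # only spikes are returned; membrane potential is
                      # not accessible and there is no explicit reset
    print(spk.shape)  # torch.Size([100, 32, 128])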
Example::
@@ -117,22 +128,28 @@ def forward(self, x):
where:

- `L = sequence length`
+ * **L** = sequence length

- `N = batch size`
+ * **N** = batch size

- `H_{in} = input_size`
+ * **H_{in}** = input_size

- `H_{out} = hidden_size`
+ * **H_{out}** = hidden_size
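A quick shape check for `forward` under the dimension names above (a sketch, assuming the constructor arguments shown earlier in this docstring)::

    import torch
    import snntorch as snn

    lif = snn.LeakyParallel(input_size=8, hidden_size=4)  # H_in=8, H_out=4

    x = torch.rand(50, 16, 8)        # (L, N, H_in)
    spk = lif(x)
    assert spk.shape == (50, 16, 4)  # (L, N, H_out)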
Learnable Parameters:

- - **rnn.weight_ih_l** (torch.Tensor) - the learnable input-hidden weights of shape (hidden_size, input_size)
- - **rnn.weight_hh_l** (torch.Tensor) - the learnable hidden-hidden weights of the k-th layer which are sampled from `beta` of shape (hidden_size, hidden_size)
- - **bias_ih_l** - the learnable input-hidden bias of the k-th layer, of shape (hidden_size)
- - **bias_hh_l** - the learnable hidden-hidden bias of the k-th layer, of shape (hidden_size)
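The parameters above can be inspected on the wrapped RNN (a sketch; the trailing `0` in the attribute names assumes ``torch.nn.RNN``'s layer-index naming, e.g. `weight_ih_l0`, which may differ from the docstring's `weight_ih_l`)::

    import snntorch as snn

    lif = snn.LeakyParallel(input_size=8, hidden_size=4, beta=0.9)

    print(lif.rnn.weight_ih_l0.shape)  # (hidden_size, input_size) -> (4, 8)
    print(lif.rnn.weight_hh_l0.shape)  # (hidden_size, hidden_size) -> (4, 4)
    print(lif.rnn.bias_ih_l0.shape)    # (hidden_size,) -> (4,)
    print(lif.rnn.bias_hh_l0.shape)    # (hidden_size,) -> (4,)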