CHANGELOG.md (2 additions, 1 deletion)
@@ -10,7 +10,8 @@ Most recent change on the top.
 ### Added
-- Users can specify having irreps of different multiplicities in `NequIPGNNModel` by providing `num_features` that is a list of `l_max + 1` features. E.g. for `l_max=2` and `parity=False`, `num_features=[5, 2, 7]` refers to `5x0e`, `2x1o` and `7x2e` features (see `configs/tutorial.yaml` for an example)
+- users can specify having irreps of different multiplicities in `NequIPGNNModel` by providing `num_features` that is a list of `l_max + 1` features. E.g. for `l_max=2` and `parity=False`, `num_features=[5, 2, 7]` refers to `5x0e`, `2x1o` and `7x2e` features (see `configs/tutorial.yaml` for an example)
+- users can specify `type_embed_num_features` as a separate hyperparameter to control the number of features in the type embedding layer (defaults to `num_features[0]`)
 - batched AOTI inference
 - per-edge-type cutoff can now lead to cost reduction in the LAMMPS ML-IAP interface
 - optional `--constant-fold` acceleration argument for `nequip-compile --mode aotinductor` that can provide small speed ups for PyTorch >= 2.8 (may fail with some models, please open issues if such instances are encountered)
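For concreteness, the mapping from a per-`l` multiplicity list to irreps can be sketched as below. This is an illustrative snippet only, not NequIP's internal implementation, and it covers just the `parity=False` case named in the changelog entry (each `l` keeps its natural parity `(-1)**l`):

```python
# Illustrative sketch only (not NequIP's internal code): build the irreps
# string implied by a per-l multiplicity list when parity=False.
def irreps_from_num_features(num_features, l_max):
    assert len(num_features) == l_max + 1, "need one multiplicity per l in 0..l_max"
    parts = []
    for ell, mul in enumerate(num_features):
        p = "e" if ell % 2 == 0 else "o"  # natural parity (-1)**ell
        parts.append(f"{mul}x{ell}{p}")
    return " + ".join(parts)

print(irreps_from_num_features([5, 2, 7], l_max=2))  # 5x0e + 2x1o + 7x2e
```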
nequip/model/nequip_models.py (10 additions, 1 deletion)
@@ -34,6 +34,7 @@ def NequIPGNNModel(
     l_max: int = 1,
     parity: bool = True,
     num_features: Union[int, List[int]] = 32,
+    type_embed_num_features: Optional[int] = None,
     radial_mlp_depth: int = 2,
     radial_mlp_width: int = 64,
     **kwargs,
@@ -50,6 +51,7 @@ def NequIPGNNModel(
         l_max (int): the maximum rotation order for the network's features, ``1`` is a good default, ``2`` is more accurate but slower (default ``1``)
         parity (bool): whether to include features with odd mirror parity -- often turning parity off gives equally good results but faster networks, so it's worth testing (default ``True``)
         num_features (int/List[int]): multiplicity of the features, smaller is faster (default ``32``); it is also possible to provide the multiplicity for each irrep, e.g. for ``l_max=2`` and ``parity=False``, ``num_features=[5, 2, 7]`` refers to ``5x0e``, ``2x1o`` and ``7x2e`` features
+        type_embed_num_features (int): number of features for the type embedding layer; if not provided, defaults to ``num_features[0]`` (default ``None``)
         radial_mlp_depth (int): number of radial layers, usually 1-3 works best, smaller is faster (default ``2``)
         radial_mlp_width (int): number of hidden neurons in radial function, smaller is faster (default ``64``)
         num_bessels (int): number of Bessel basis functions (default ``8``)
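Taken together, the documented hyperparameters might be collected as in the hedged sketch below. The builder also requires settings that are not part of this diff (passed through `**kwargs`), so only a keyword dictionary with arbitrary example values is shown rather than a full instantiation:

```python
# Hedged usage sketch: only hyperparameters documented in the docstring above;
# other required model/dataset settings (not part of this diff) are omitted.
model_hyperparams = dict(
    l_max=2,
    parity=False,
    num_features=[5, 2, 7],       # 5x0e, 2x1o, 7x2e features
    type_embed_num_features=16,   # decoupled from num_features[0]
    radial_mlp_depth=2,
    radial_mlp_width=64,
)
# e.g. NequIPGNNModel(**model_hyperparams, ...)  # plus the remaining required settings
```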
@@ -80,6 +82,13 @@ def NequIPGNNModel(
         f"`num_features` should be of length `l_max + 1` ({l_max+1}), but found `num_features={num_features}` with {len(num_features)} entries."
     )
 
+    # === type embedding ===
+    type_embed_num_features = (
+        type_embed_num_features
+        if type_embed_num_features is not None
+        else num_features[0]
+    )
+
     # === convnet ===
     # convert a single set of parameters uniformly for every layer
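The fallback added in this hunk can be exercised in isolation. The helper below is a standalone sketch of that behavior (it assumes `num_features` has already been normalized to a per-`l` list, as it is by this point in the builder):

```python
# Standalone sketch of the fallback added above: an unset
# type_embed_num_features falls back to the l=0 multiplicity.
def resolve_type_embed_num_features(type_embed_num_features, num_features):
    return (
        type_embed_num_features
        if type_embed_num_features is not None
        else num_features[0]
    )

print(resolve_type_embed_num_features(None, [5, 2, 7]))  # 5
print(resolve_type_embed_num_features(16, [5, 2, 7]))    # 16
```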