<br>
### ↺ Refine fitting
If you already have body model parameters (pose, shape, translation and scale), but they are not ideal, you can refine them.
The optimization-specific configuration for refining the parameters is set under `refine_bm_fitting` in `config.yaml` with the following variables:
- `iterations` - (int) number of iterations
- `start_lr_decay_iteration` - (int) iteration at which to start the learning rate decay, calculated as `lr * (iterations - current iteration) / iterations`
- `body_model` - (string) which BM to use (`smpl`, `smplx`, ...). See [Notes](##-📝-Notes) for supported models
- `use_landmarks` - (string / list) which body landmarks to use for fitting. Can be `null` to not use landmarks, `All` to use all possible landmarks, `{BM}_INDEX_LANDMARKS` defined in `landmarks.py`, or a list of landmark names defined in `landmarks.py`, e.g. `["Lt. 10th Rib", "Lt. Dactylion", ...]`
- `refine_params` - (list of strings) parameters you want to refine; can contain: `pose`, `shape`, `transl`, `scale`
- `use_losses` - (list) losses to use. The complete list of losses is `["data","smooth","landmark","normal","partial_data"]`. Check [notes on losses](##-📝-Notes).
- `loss_weight_option` - (string) the strategy for the loss weights, defined in `loss_weights_configs.yaml` under `fit_verts_loss_weight_strategy`
- `prior_folder` - (string) path to the GMM prior loss `.pkl` file
- `num_gaussians` - (float) number of Gaussians to use for the prior
- `lr` - (float) learning rate
- `normal_threshold_angle` - (float) used if the normal loss is included in `use_losses`; penalizes kNN points only if the angle is lower than this threshold, otherwise the points are ignored
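Collecting the variables above, a `refine_bm_fitting` block in `config.yaml` could look like the sketch below. Every value, the strategy name, and the prior path are illustrative assumptions, not defaults shipped with the repository:

```yaml
refine_bm_fitting:
  iterations: 500                # number of optimization iterations
  start_lr_decay_iteration: 250  # start the linear lr decay at this iteration
  body_model: smpl               # see Notes for supported models
  use_landmarks: All             # null / All / {BM}_INDEX_LANDMARKS / list of names
  refine_params: ["pose", "shape", "transl", "scale"]
  use_losses: ["data", "landmark"]
  loss_weight_option: default    # a strategy name from loss_weights_configs.yaml (hypothetical)
  prior_folder: ./priors         # path to the GMM prior .pkl file (hypothetical)
  num_gaussians: 8
  lr: 0.01
  normal_threshold_angle: 30.0   # only used when "normal" is in use_losses
```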
Use the `evaluate_fitting.py` script to evaluate the fitting.
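As a side note, the decay rule quoted for `start_lr_decay_iteration` can be sketched in plain Python (a hypothetical helper, not part of the repository):

```python
def decayed_lr(base_lr: float, iteration: int, iterations: int, start_decay: int) -> float:
    """Linear learning rate decay as described for `start_lr_decay_iteration`:
    the rate stays at `base_lr` until `start_decay`, then follows
    lr * (iterations - current iteration) / iterations."""
    if iteration < start_decay:
        return base_lr
    return base_lr * (iterations - iteration) / iterations

# Example: base lr 0.01, 100 iterations, decay starting at iteration 50.
print(decayed_lr(0.01, 25, 100, 50))  # before decay: 0.01
print(decayed_lr(0.01, 75, 100, 50))  # after decay: 0.01 * 25 / 100 = 0.0025
```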
If you additionally want to evaluate the per vertex error (pve) after fitting, the dataset needs to additionally return:
- `vertices_gt` - (np.ndarray) ground truth vertices of the BM
- `faces_gt` - (np.ndarray) ground truth faces of the BM
If you want to refine the parameters that have already been fitted, the dataset needs to additionally return:
- `pose` - (torch.tensor) fitted pose parameters dim 1 x 72
- `shape` - (torch.tensor) fitted shape parameters dim 1 x 10
- `trans` - (torch.tensor) fitted translation dim 1 x 3
- `gender` - (str) gender of the body model
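To make the extra return values concrete, a minimal dataset sketch could look like the following. The class and field names are hypothetical, and the shapes assume SMPL-sized parameters as listed above:

```python
import torch
from torch.utils.data import Dataset

class RefineFittingDataset(Dataset):
    """Minimal sketch of a dataset usable for refinement: each item carries
    previously fitted body model parameters (alongside the usual scan data)."""

    def __init__(self, fitted_params: list):
        self.fitted_params = fitted_params  # one dict per scan

    def __len__(self):
        return len(self.fitted_params)

    def __getitem__(self, idx):
        item = self.fitted_params[idx]
        return {
            "pose": item["pose"],      # torch.tensor, dim 1 x 72
            "shape": item["shape"],    # torch.tensor, dim 1 x 10
            "trans": item["trans"],    # torch.tensor, dim 1 x 3
            "gender": item["gender"],  # str
        }

# Usage with a single dummy entry:
ds = RefineFittingDataset([{
    "pose": torch.zeros(1, 72),
    "shape": torch.zeros(1, 10),
    "trans": torch.zeros(1, 3),
    "gender": "neutral",
}])
sample = ds[0]
print(sample["pose"].shape)  # torch.Size([1, 72])
```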
We provide the FAUST, CAESAR and 4DHumanOutfit dataset implementations in `datasets.py`. You can obtain the datasets from [here](https://faust-leaderboard.is.tuebingen.mpg.de/), [here](https://bodysizeshape.com/page-1855750) and [here](https://kinovis.inria.fr/4dhumanoutfit/).