Context
We want to implement more functionality for evaluating pose estimation model performance and for detecting outliers - i.e. erroneous/implausible pose predictions.
LightningPose currently uses a set of 3 spatiotemporal constraints as unsupervised losses during model training - i.e. the network is penalised if it violates these constraints. These 3 constraints are (with a rough code sketch after the list):
temporal smoothness: there shouldn't be big "jumps" between frames $t-1$ and $t$ ("big" meaning above a certain threshold)
pose plausibility: the keypoints in a frame should not form implausible configurations. LightningPose derives "plausibility" using PCA - i.e. a pose is flagged as implausible if it lies too far outside a certain low-dimensional subspace.
multi-view consistency: if we have two views of the same animal (e.g. two cameras, or 1 camera + mirror), the two views of the same keypoint must be consistently "projectable" to some 3D subspace (via PCA), because the "real" position of that keypoint must be a single point in 3D.
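To make these concrete, here is a minimal numpy sketch of how each constraint could be turned into a per-frame score for predicted poses. The function names, array layout and thresholds are assumptions for illustration only - this is neither LightningPose's training-time implementation (which operates on batched tensors) nor an existing movement API.

```python
import numpy as np

# Assumed data layout: `poses` is a (n_frames, n_keypoints, 2) array of
# predicted x, y coordinates for a single individual.

def temporal_norm(poses):
    """Temporal smoothness: size of the jump each keypoint makes between
    frames t-1 and t; we keep the worst keypoint's jump per frame."""
    disp = np.linalg.norm(np.diff(poses, axis=0), axis=-1)  # (n_frames - 1, n_keypoints)
    scores = np.zeros(len(poses))
    scores[1:] = disp.max(axis=1)
    return scores

def fit_pca_basis(poses, n_components):
    """Fit a low-dimensional PCA basis to flattened poses (ideally to
    trusted or manually labelled frames, not to the predictions themselves)."""
    flat = poses.reshape(len(poses), -1)           # (n_frames, n_keypoints * 2)
    mean = flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat - mean, full_matrices=False)
    return mean, vt[:n_components]                 # mean pose + top principal axes

def pca_error(poses, mean, basis):
    """Pose plausibility: distance between each pose and its reconstruction
    from the PCA subspace; large values suggest an implausible pose."""
    centred = poses.reshape(len(poses), -1) - mean
    recon = centred @ basis.T @ basis              # project onto the subspace
    return np.linalg.norm(centred - recon, axis=1)
```

The multi-view consistency check is the same computation as pca_error, except that the coordinates of matching keypoints from all views are concatenated into a single vector per frame and the subspace is fixed at 3 dimensions (the "real" keypoint position being a single 3D point). In all three cases, frames whose score exceeds a threshold (fixed, or an empirical percentile) would be flagged as outliers.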
Many more details about these can be found in the paper. As far as I can tell, the implementation of these losses is in this module.
What we want
We'd like to use these same constraints as post hoc outlier detection heuristics within movement, in addition to the confidence threshold approach we already use in filtering.py. In fact, most of these heuristics had already been used by others as outlier detection approaches (see references in the LightningPose paper) before being implemented as network losses by LightningPose.
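(For comparison, confidence thresholding amounts conceptually to the following. This is a simplified numpy sketch, not the actual filtering.py code, which operates on movement's xarray-based datasets; the 0.6 default is just an illustrative value.)

```python
import numpy as np

def filter_by_confidence(poses, confidence, threshold=0.6):
    """Set predictions whose confidence falls below `threshold` to NaN,
    so they can be dropped or interpolated downstream.
    poses: (n_frames, n_keypoints, 2); confidence: (n_frames, n_keypoints)."""
    filtered = poses.astype(float)  # astype returns a copy by default
    filtered[confidence < threshold] = np.nan
    return filtered
```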
Currently we see this mostly as outlier detection, but we may additionally want to use these measures as quality metrics for predicted poses in the absence of ground truth.
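Any of the per-frame scores sketched above could serve that purpose, for instance by reporting the stretches of video where the score stays high, so a user knows which sections to inspect or relabel. A hypothetical helper:

```python
import numpy as np

def worst_sections(score, threshold, min_length=5):
    """Return (start, end) frame ranges where a per-frame score stays
    above `threshold` for at least `min_length` consecutive frames."""
    above = (score > threshold).astype(int)
    edges = np.diff(above, prepend=0, append=0)  # +1 at run starts, -1 after run ends
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    return [(s, e) for s, e in zip(starts, ends) if e - s >= min_length]
```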
Discussions
This idea came out of a chat that @sfmig, @lochhh and I had with @themattinthehatt (LightningPose co-author and dev). We had initially documented this as a topic on Zulip.
Folks from the Allen Institute for Neural Dynamics are also interested in this, as mentioned by @Di-Wang-AIND on Zulip:
"For the evaluation part, we are more interested in performing quality control on the predictions output by a single framework. If the predictions are unsatisfactory, the quality of the training data may need to improve (i.e. cleaning the data, relabelling frames), or more labelled frames may be needed. Therefore, the ability to detect outliers, or to locate the sections of a video with poor predictions, would be great to have. The evaluation metrics introduced by Lightning-pose, like pixel error, temporal_norm and pca_singleview_error, may help to automatically filter out frames with poor predictions."
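Of the metrics named above, temporal_norm and pca_singleview_error correspond to the label-free heuristics sketched earlier, whereas pixel error needs ground truth (e.g. a held-out set of labelled frames). For completeness, a minimal sketch of the latter:

```python
import numpy as np

def pixel_error(pred, truth):
    """Per-frame mean Euclidean distance (in pixels) between predicted
    and ground-truth keypoints; both arrays are (n_frames, n_keypoints, 2)."""
    return np.linalg.norm(pred - truth, axis=-1).mean(axis=-1)
```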