Implement LightningPose losses for outlier detection #145

Open
3 tasks
niksirbi opened this issue Mar 15, 2024 · 1 comment
Labels
enhancement New optional feature

Comments

@niksirbi
Member

niksirbi commented Mar 15, 2024

Context

We want to implement more functionality for evaluating pose estimation model performance and for detecting outliers, i.e. erroneous or implausible pose predictions.

LightningPose currently uses a set of 3 spatiotemporal constraints as unsupervised losses during model training - i.e. the network is penalised if it violates these constraints. These 3 constraints are:

  • temporal smoothness: there shouldn't be big "jumps" between frame $t-1$ and $t$ ("big" meaning above a certain threshold)
  • pose plausibility: keypoints in a frame should not be in implausible configurations. LightningPose derives "plausibility" using PCA - i.e. a pose is flagged as implausible if it lies far outside a certain low-dimensional subspace.
  • multi-view consistency: if we have two views of the same animal (e.g. two cameras, or 1 camera + mirror), the two views of the same keypoint must be consistently "projectable" to some 3D subspace (via PCA), because the "real" position of that keypoint must be a single point in 3D.

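To make the first two constraints concrete, here is a hedged sketch of how they could be computed post hoc from predicted trajectories. The function names and shapes below are illustrative assumptions, not LightningPose's actual API:

```python
import numpy as np

def temporal_norm(positions):
    """Frame-to-frame displacement per keypoint.

    positions: array of shape (n_frames, n_keypoints, 2).
    Returns an array of shape (n_frames - 1, n_keypoints) holding the
    Euclidean "jump" between consecutive frames; values above some
    threshold suggest a temporal-smoothness violation.
    """
    diffs = np.diff(positions, axis=0)  # (n_frames - 1, n_keypoints, 2)
    return np.linalg.norm(diffs, axis=-1)

def pca_reprojection_error(poses, n_components=3):
    """Pose-plausibility score via PCA reconstruction error.

    poses: array of shape (n_frames, n_keypoints * 2), each row a
    flattened pose. A pose far from the low-dimensional PCA subspace
    (i.e. with large reconstruction error) would be flagged as
    implausible.
    """
    mean = poses.mean(axis=0)
    centred = poses - mean
    # PCA via SVD; keep the top n_components right singular vectors
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    components = vt[:n_components]
    # project onto the subspace and back, then measure the residual
    projected = centred @ components.T @ components
    return np.linalg.norm(centred - projected, axis=1)
```

The multi-view consistency loss would follow the same PCA-residual pattern, but applied to the stacked 2D coordinates of the same keypoint across views.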
More detail about these constraints can be found in the paper. As far as I can tell, the losses are implemented in this module.

What we want

We'd like to use these same constraints as post hoc outlier detection heuristics within movement, in addition to the confidence threshold approach we already use in filtering.py. In fact, most of these heuristics had already been used by others as outlier detection approaches (see references in the LightningPose paper) before being implemented as network losses by LightningPose.

Discussions

This idea came out of a chat that @sfmig, @lochhh and I had with @themattinthehatt (LightningPose co-author and dev). We had initially documented this as a topic on Zulip.

Folks from the Allen Institute for Neural Dynamics are also interested in this, as mentioned by @Di-Wang-AIND on Zulip:

For the evaluation part, we are more interested in performing quality control on the predictions output by a single framework. If the predictions are unsatisfactory, the quality of the training data may need to improve (i.e., clean data, relabel frames) or more labeled frames are needed. Therefore, the ability to detect outliers or locate the sections of video with poor predictions would be great to have. The evaluation metrics introduced by Lightning-pose, like pixel error, temporal_norm and pca_singleview_error, may help to automatically filter out frames with poor predictions.

Tasks

  1. enhancement (assigned to @lochhh)
  2. enhancement (assigned to @sfmig)
@niksirbi niksirbi added the enhancement New optional feature label Mar 15, 2024
@sfmig
Contributor

sfmig commented Jun 17, 2024

Currently we see this mostly as a tool for outlier detection, but we may additionally want to use it as a quality metric for predicted poses in the absence of ground truth.
