PosePipe is a human pose estimation (HPE) pipeline designed to facilitate movement analysis from videos.
It uses DataJoint to manage relationships between algorithms, videos, and intermediate outputs.
Key features:
- Modular wrappers for numerous state-of-the-art HPE algorithms
- Structured video and data management via DataJoint
- Output visualizations to easily compare and analyze results
- Designed for movement analysis pipelines in clinical research
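As a rough illustration of how DataJoint structures this kind of pipeline, the sketch below defines a video table and a dependent pose-estimate table. The schema name, table names, and attributes are illustrative assumptions, not PosePipe's actual schema.

```python
import datajoint as dj
import numpy as np

# Illustrative schema; PosePipe's real table definitions differ.
schema = dj.schema("posepipe_demo")


@schema
class Video(dj.Manual):
    definition = """
    video_id   : int           # unique video identifier
    ---
    filename   : varchar(255)  # path to the source video
    """


@schema
class PoseEstimate(dj.Computed):
    definition = """
    -> Video                   # each estimate depends on one video
    ---
    method     : varchar(32)   # name of the HPE algorithm used
    keypoints  : longblob      # per-frame 2D keypoints
    """

    def make(self, key):
        # A real pipeline would run the selected HPE algorithm on the video here.
        self.insert1(dict(key, method="demo", keypoints=np.zeros((0, 17, 2))))
```

With a configured database, inserting a row into `Video` and calling `PoseEstimate.populate()` would trigger the computed step; this is the general pattern DataJoint pipelines use to tie algorithms, videos, and intermediate outputs together.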
- Install PosePipe:

  ```bash
  pip install pose_pipeline
  ```

  Detailed installation instructions are provided for launching a DataJoint MySQL database and installing the OpenMMLab packages.
- Test the pipeline:

  Use the Getting Started Notebook to run your videos through the pose estimation framework.
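Before launching the notebook, the pipeline needs to reach the DataJoint MySQL backend described above. The snippet below is a minimal, hypothetical configuration sketch; the host, credentials, and import path are placeholders to adapt to your own deployment.

```python
import datajoint as dj

# Point DataJoint at your MySQL instance (values here are placeholders).
dj.config["database.host"] = "localhost"
dj.config["database.user"] = "root"
dj.config["database.password"] = "your-password"
dj.conn()  # verify the connection before importing the pipeline tables

# Importing the package registers its tables against the configured database.
import pose_pipeline  # noqa: F401
```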
- Upgraded mmcv to v2.x
- Tracking Algorithms (from mmdetection):
- Top Down 2D Body Keypoint Detection Algorithms (from mmpose):
- Top Down 2D Hand Keypoint Detection Algorithms (from mmpose):
- Bottom Up Algorithms:
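The top-down 2D keypoint detectors listed above come from mmpose's 1.x API, which pairs with mmcv v2.x. Independent of PosePipe's wrappers, a minimal sketch of running such a model directly through mmpose's high-level inferencer might look as follows; the model alias and video path are placeholders.

```python
from mmpose.apis import MMPoseInferencer

# 'human' selects a default top-down 2D body keypoint model (mmpose 1.x).
inferencer = MMPoseInferencer("human")

# The inferencer yields per-frame results; keypoints are under 'predictions'.
for frame_result in inferencer("example_video.mp4", show=False):
    predictions = frame_result["predictions"]
```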
VSCode is recommended for development. Include the following in your `.vscode/settings.json` to enable consistent black formatting:
```json
{
    "python.formatting.blackArgs": [
        "--line-length=120",
        "--include='*py'",
        "--exclude='*ipynb'",
        "--extend-exclude='.env'",
        "--extend-exclude='3rdparty/*'"
    ],
    "editor.rulers": [120]
}
```
- License: GPL-3.0
- Source Code: https://github.com/IntelligentSensingAndRehabilitation/PosePipeline
- PyPI: https://pypi.org/project/posepipe
- Issues/Contributions: Please use Issues for bug reports and feature requests
If you use this tool for research, please cite:
```bibtex
@misc{posepipe2024,
  author       = {R James Cotton},
  title        = {PosePipe: Open-Source Human Pose Estimation Pipeline for Clinical Research},
  year         = {2024},
  howpublished = {\url{https://github.com/IntelligentSensingAndRehabilitation/PosePipeline}}
}
```