PyMO

A library for using motion capture data for machine learning.

This library is currently highly experimental and everything is subject to change :)

Roadmap

  • Mocap Data Parsers and Writers
  • Common mocap pre-processing algorithms
  • Feature extraction library
  • Visualization tools

Current Features

Dependencies

numpy, scipy, joblib, scikit-learn, pandas, jupyter, matplotlib

Demos

See the demo notebooks for processing and visualization examples:

  • bvh2features: downsampling, root-centric transformation, joint selection, and data augmentation (mirroring)
  • features2bvh: converting features back to BVH format
  • Position: converting BVH to joint positions
  • PlayMocap: visualizing mocap data in HTML

Read BVH Files

from pymo.parsers import BVHParser

parser = BVHParser()

parsed_data = parser.parse('data/AV_8Walk_Meredith_HVHA_Rep1.bvh')
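
The parser returns a MocapData object. A minimal sketch of inspecting it, assuming the MocapData attributes root_name, framerate, and values (a pandas DataFrame with one column per motion channel), which the examples below also rely on:

# Peek at the parsed clip (assumes the usual MocapData attributes)
print(parsed_data.root_name)           # name of the root joint, e.g. 'Hips'
print(parsed_data.framerate)           # frame timing, taken from the BVH 'Frame Time' header
print(parsed_data.values.shape)        # (frames, channels) DataFrame of the motion channels
print(list(parsed_data.values.columns[:3]))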

Get Skeleton Info

from pymo import viz_tools

viz_tools.print_skel(parsed_data)

This prints the skeleton hierarchy:

- Hips (None)
| | - RightUpLeg (Hips)
| | - RightLeg (RightUpLeg)
| | - RightFoot (RightLeg)
| | - RightToeBase (RightFoot)
| | - RightToeBase_Nub (RightToeBase)
| - LeftUpLeg (Hips)
| - LeftLeg (LeftUpLeg)
| - LeftFoot (LeftLeg)
| - LeftToeBase (LeftFoot)
| - LeftToeBase_Nub (LeftToeBase)
- Spine (Hips)
| | - RightShoulder (Spine)
| | - RightArm (RightShoulder)
| | - RightForeArm (RightArm)
| | - RightHand (RightForeArm)
| | | - RightHand_End (RightHand)
| | | - RightHand_End_Nub (RightHand_End)
| | - RightHandThumb1 (RightHand)
| | - RightHandThumb1_Nub (RightHandThumb1)
| - LeftShoulder (Spine)
| - LeftArm (LeftShoulder)
| - LeftForeArm (LeftArm)
| - LeftHand (LeftForeArm)
| | - LeftHand_End (LeftHand)
| | - LeftHand_End_Nub (LeftHand_End)
| - LeftHandThumb1 (LeftHand)
| - LeftHandThumb1_Nub (LeftHandThumb1)
- Head (Spine)
- Head_Nub (Head)
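
The same hierarchy is available programmatically: print_skel reads it from the skeleton dict on the parsed data, which maps each joint name to (at least) its parent and children. A minimal sketch based on that structure:

# Walk the skeleton dict that print_skel prints from
for joint, info in parsed_data.skeleton.items():
    print(joint, '<-', info['parent'], '| children:', info['children'])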

scikit-learn Pipeline API

from sklearn.pipeline import Pipeline
from pymo.preprocessing import *

data_pipe = Pipeline([
    ('rcpn', RootCentricPositionNormalizer()),
    ('delta', RootTransformer('abdolute_translation_deltas')),  # sic: this spelling matches the method name the library expects
    ('const', ConstantsRemover()),
    ('np', Numpyfier()),
    ('down', DownSampler(2)),
    ('stdscale', ListStandardScaler())
])

# train_X: a list of parsed MocapData objects, e.g. [parsed_data]
piped_data = data_pipe.fit_transform(train_X)
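
To map processed features back to mocap data (the direction covered by the features2bvh demo), the pipeline can be inverted and the result written out as BVH. A minimal sketch, assuming every step in the pipeline above implements inverse_transform and that pymo.writers.BVHWriter exposes a write(data, file) method as used in the demo notebooks; the output filename is only illustrative:

from pymo.writers import BVHWriter

# Invert the whole pipeline: feature arrays -> list of MocapData objects
recovered = data_pipe.inverse_transform(piped_data)

# Write the first clip back out as a BVH file
writer = BVHWriter()
with open('recovered.bvh', 'w') as f:
    writer.write(recovered[0], f)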

Convert to Positions

from pymo.preprocessing import MocapParameterizer

# Convert joint rotations to 3D joint positions
mp = MocapParameterizer('positions')

positions = mp.fit_transform([parsed_data])
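
Each element of positions is again a MocapData object, so individual position channels can be pulled out of its values DataFrame; the column naming follows the <Joint>_<Axis>position pattern used further down (e.g. RightFoot_Yposition). A minimal sketch:

# Extract one joint's vertical trajectory as a numpy array
pos_df = positions[0].values                       # pandas DataFrame, one column per channel
foot_height = pos_df['RightFoot_Yposition'].values
print(pos_df.shape, foot_height.min(), foot_height.max())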

Visualize a single 2D Frame

from pymo.viz_tools import draw_stickfigure
draw_stickfigure(positions[0], frame=10)

[Image: 2D skeleton visualization]

Animate in 3D (inside a Jupyter Notebook)

from pymo.viz_tools import nb_play_mocap

nb_play_mocap(positions[0], 'pos',
              scale=2, camera_z=800, frame_time=1/120,
              base_url='../pymo/mocapplayer/playBuffer.html')

[Image: mocap player]

Foot/Ground Contact Detector

from pymo.features import *

plot_foot_up_down(positions[0], 'RightFoot_Yposition')

[Image: foot up/down plot]

import matplotlib.pyplot as plt

# pos_data: a list of position-parameterized clips, like `positions` above
signal = create_foot_contact_signal(pos_data[3], 'RightFoot_Yposition')
plt.figure(figsize=(12,5))
plt.plot(signal, 'r')
plt.plot(pos_data[3].values['RightFoot_Yposition'].values, 'g')

[Image: foot contact signal plot]
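
Independently of the detector above, a simple binary contact mask can be derived straight from the foot height by thresholding; this is a generic heuristic, not part of the PyMO API, and the threshold below is purely illustrative:

import numpy as np

# Frames whose foot height stays below a hand-picked threshold are treated as ground contact
foot_y = pos_data[3].values['RightFoot_Yposition'].values
contact_mask = foot_y < (foot_y.min() + 1.0)       # threshold chosen for illustration only
print('contact frames:', int(np.count_nonzero(contact_mask)), 'of', len(contact_mask))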

Feedback, Bugs, and Questions

For questions, feedback, and bug reports, please use the GitHub Issues page.

Credits

Created by Omid Alemi

Modified by Simon Alexanderson

License

This code is available under the MIT license.
