
# MotionBERT: A Unified Perspective on Learning Human Motion Representations

MotionBERT proposes a pretraining stage in which a motion encoder is trained to recover the underlying 3D motion from noisy, partial 2D observations. The motion representations acquired in this way incorporate geometric, kinematic, and physical knowledge about human motion, and can be easily transferred to multiple downstream tasks.
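The pretraining objective described above can be sketched as follows. This is a simplified illustration, not the official implementation: the corruption parameters (`mask_ratio`, `noise_std`) are placeholder values, and in the real setup the corrupted 2D sequence would be fed through the paper's DSTformer motion encoder rather than compared directly.

```python
import numpy as np

rng = np.random.default_rng(42)

def corrupt_2d(kpts_2d, mask_ratio=0.15, noise_std=0.01):
    """Simulate the noisy, partial 2D observations used for pretraining:
    add Gaussian noise to all joints, then randomly mask some of them.
    mask_ratio / noise_std are illustrative values, not the paper's."""
    noisy = kpts_2d + rng.normal(scale=noise_std, size=kpts_2d.shape)
    mask = rng.random(kpts_2d.shape[:-1]) < mask_ratio  # shape (T, J)
    noisy[mask] = 0.0  # masked joints are zeroed out
    return noisy, mask

def reconstruction_loss(pred_3d, gt_3d):
    """Per-joint position error driving the encoder to recover 3D motion."""
    return np.linalg.norm(pred_3d - gt_3d, axis=-1).mean()

# Toy shapes: T frames, J joints. A real pipeline would lift `noisy_2d`
# back to 3D with the motion encoder and minimize reconstruction_loss.
T, J = 8, 17
gt_3d = rng.normal(size=(T, J, 3))
noisy_2d, mask = corrupt_2d(gt_3d[..., :2])
```

Because the encoder must infer the missing and perturbed joints from temporal and skeletal context, the learned representation captures motion structure that transfers to downstream tasks such as 3D pose estimation.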

## Results and Models

### Human3.6M Dataset

| Arch                   | MPJPE | P-MPJPE | ckpt | log | Details and Download |
| :--------------------- | :---: | :-----: | :--: | :-: | :------------------: |
| MotionBERT\*           | 35.3  |  27.7   | ckpt |  /  |  motionbert_h36m.md  |
| MotionBERT-finetuned\* | 27.5  |  21.6   | ckpt |  /  |  motionbert_h36m.md  |

### Human3.6M Dataset from official repo <sup>1</sup>

| Arch                   | MPJPE | Average MPJPE | P-MPJPE | ckpt | log | Details and Download |
| :--------------------- | :---: | :-----------: | :-----: | :--: | :-: | :------------------: |
| MotionBERT\*           | 39.8  |     39.2      |  33.4   | ckpt |  /  |  motionbert_h36m.md  |
| MotionBERT-finetuned\* | 37.7  |     37.2      |  32.2   | ckpt |  /  |  motionbert_h36m.md  |

<sup>1</sup> Please refer to the doc for more details.
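The tables above report MPJPE (mean per-joint position error) and P-MPJPE (MPJPE after Procrustes alignment). A minimal sketch of how these two metrics are computed, assuming keypoint arrays of shape `(num_joints, 3)` — function names here are illustrative, not the repo's API:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error: average Euclidean distance
    between predicted and ground-truth 3D joints."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def p_mpjpe(pred, gt):
    """Procrustes-aligned MPJPE: fit a similarity transform (scale,
    rotation, translation) from pred to gt, then measure the error."""
    p = pred - pred.mean(axis=0)
    g = gt - gt.mean(axis=0)
    # Orthogonal Procrustes: SVD of the cross-covariance matrix.
    U, s, Vt = np.linalg.svd(p.T @ g)
    if np.linalg.det(U @ Vt) < 0:  # avoid reflections
        Vt[-1] *= -1
        s[-1] *= -1
    R = U @ Vt
    scale = s.sum() / (p ** 2).sum()
    aligned = scale * p @ R + gt.mean(axis=0)
    return mpjpe(aligned, gt)
```

P-MPJPE is always at most MPJPE, since alignment removes global scale, rotation, and translation errors before measuring the per-joint distances.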

Models with \* are converted from the official repo. The config files of these models are provided for validation only; we do not guarantee their training accuracy, and we welcome you to contribute your reproduction results.