Overview

1. Model description | Used to

Model description
  • Model name: MediaPipe BlazePose GHUM 3D
  • Detailed description
Lite (3 MB), Full (6 MB) and Heavy (26 MB) models that estimate the full 3D body pose of an individual in videos captured by a smartphone or web camera.
Optimized for on-device, real-time fitness applications: the Lite model runs at ~44 FPS on a CPU via XNNPack TFLite and ~49 FPS via TFLite GPU on a Pixel 3. The Full model runs at ~18 FPS on a CPU via XNNPack TFLite and ~40 FPS via TFLite GPU on a Pixel 3.
The Heavy model runs at ~4 FPS on a CPU via XNNPack TFLite and ~19 FPS via TFLite GPU on a Pixel 3.

Used to
Applications
3D full body pose estimation for single-person videos on mobile, desktop and in browser.

Domain & Users
 - Augmented reality
 - 3D Pose and gesture recognition
 - Fitness and repetition counting
 - 3D pose measurements (angles / distances)

Out-of-scope applications
 - Multiple people in an image
 - People too far away from the camera (e.g. further than 14 feet/4 meters)
 - Head is not visible
 - Applications requiring metric-accurate depth
 - Any form of surveillance or identity recognition is explicitly out of scope and not enabled by this technology

2. Model type and Model architecture

Quoted from the model card
Convolutional Neural Network: MobileNetV2-like with customized blocks for real-time performance.

Model type: CNN

Model architecture: MobileNetV2-like, with customized blocks for real-time performance

3. Input and output

Inputs
Regions in the video frames where a person has been detected.
Represented as a 256x256x3 array containing the aligned full human body, centered by mid-hip in a vertical body pose, with rotation distortion of (-10, 10). Channel order: RGB, with values in [0.0, 1.0].
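As a rough sketch of this input contract (illustrative only; MediaPipe's real pipeline also performs detection, alignment and rotation normalization, and `preprocess_crop` is a hypothetical helper), a person crop can be resized and scaled in Python:

```python
import numpy as np

def preprocess_crop(crop: np.ndarray, size: int = 256) -> np.ndarray:
    """Resize an RGB uint8 person crop to size x size and scale to [0.0, 1.0].

    The crop is assumed to already contain the aligned, mid-hip-centered
    person region; a simple nearest-neighbor resize stands in for the
    real resampling.
    """
    h, w, _ = crop.shape
    rows = np.arange(size) * h // size          # nearest source row per output row
    cols = np.arange(size) * w // size          # nearest source column per output column
    resized = crop[rows][:, cols]               # (size, size, 3), still uint8
    return resized.astype(np.float32) / 255.0   # RGB values in [0.0, 1.0]
```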

Output(s)
33x5 array corresponding to (x, y, z, visibility, presence).
 - X and Y coordinates are local to the region of interest and range over [0.0, 255.0].
 - The Z coordinate is measured in "image pixels", like the X and Y coordinates, and represents the distance relative to the plane of the subject's hips, which is the origin of the Z axis. Negative values lie between the hips and the camera; positive values lie behind the hips.
The Z scale is similar to the X and Y scales, but it differs in nature: it is obtained not via human annotation but by fitting synthetic data (the GHUM model) to the 2D annotations. Note that Z is not metric, only up to scale.
 - Visibility is in the range [min_float, max_float] and, after a user-applied sigmoid, denotes the probability that a keypoint is located within the frame and not occluded by another, bigger body part or another object.
 - Presence is in the range [min_float, max_float] and, after a user-applied sigmoid, denotes the probability that a keypoint is located within the frame.
[
  {
    score: 0.8,
    keypoints: [
      {x: 230, y: 220, score: 0.99, name: "nose"},
      {x: 212, y: 190, score: 0.91, name: "left_eye"},
      ...
    ],
    keypoints3D: [
      {x: 0.65, y: 0.11, z: 0.05, score: 0.99, name: "nose"},
      ...
    ],
    segmentation: {
      maskValueToLabel: (maskValue: number) => { return 'person' },
      mask: {
        toCanvasImageSource(): ...
        toImageData(): ...
        toTensor(): ...
        getUnderlyingType(): ...
      }
    }
  }
]
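The visibility and presence bullets above imply that the raw 33x5 tensor carries logits, not probabilities. A minimal sketch in Python (hypothetical helper names; only the layout described above is assumed):

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def decode_landmarks(raw: np.ndarray):
    """Split a 33x5 (x, y, z, visibility, presence) array.

    x/y/z stay in region-of-interest "pixel" units; visibility and
    presence are raw logits mapped to probabilities by a user-applied
    sigmoid, as the model card describes.
    """
    assert raw.shape == (33, 5)
    xyz = raw[:, :3]
    visibility = sigmoid(raw[:, 3])
    presence = sigmoid(raw[:, 4])
    return xyz, visibility, presence
```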

4. Evaluation metric

Pose metric: PDJ
  • average Percentage of Detected Joints: the average fraction of joints that are correctly detected
We consider a keypoint to be correctly detected if the predicted visibility matches the ground truth and the absolute 2D Euclidean error between the reference and target keypoint, normalized by the 2D torso diameter projection, is smaller than 20%.
This threshold was determined during development as the maximum value that does not degrade accuracy when classifying a pose/asana based solely on the keypoints, without perceiving the original RGB image. The model provides 3D coordinates, but the z-coordinate is obtained from synthetic data, so for a fair comparison with human annotations only 2D coordinates are used.
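This criterion can be sketched as a small Python function (a hypothetical helper, not the authors' evaluation code): a joint counts as detected when predicted visibility matches ground truth and its 2D error, normalized by the torso diameter, is under the 20% threshold.

```python
import numpy as np

def pdj(pred_xy, gt_xy, pred_vis, gt_vis, torso_diameter, threshold=0.2):
    """Percentage of Detected Joints for one pose.

    A joint is detected when its visibility matches ground truth and the
    2D Euclidean error, normalized by the 2D torso diameter projection,
    is below `threshold`. Only 2D coordinates are compared, matching the
    model card's protocol.
    """
    errors = np.linalg.norm(np.asarray(pred_xy) - np.asarray(gt_xy), axis=-1)
    matched = np.asarray(pred_vis) == np.asarray(gt_vis)
    detected = matched & (errors / torso_diameter < threshold)
    return float(detected.mean())
```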

Evaluation results
  • Geographical
Evaluation of the heavy, full and lite models across 14 regions on a dataset of smartphone back-facing camera photos yields an average performance of 94.2% +/- 1.3% stdev with a range of [91.4%, 96.2%] across regions for the heavy model, an average performance of 91.8% +/- 1.4% stdev with a range of [89.2%, 94.0%] for the full model, and an average performance of 87.0% +/- 2.0% stdev with a range of [83.2%, 89.7%] for the lite model.
Comparison against our fairness criteria yields a maximum discrepancy between the best and worst performing regions of 4.8% for the heavy model, 4.8% for the full model and 6.5% for the lite model.
  • Skin tone and gender
Evaluation on the smartphone back-facing camera photos dataset yields an average performance of 93.6% with a range of [89.3%, 95.0%] across all skin tones for the heavy model, an average performance of 91.1% with a range of [85.9%, 92.9%] for the full model, and an average performance of 86.4% with a range of [80.5%, 87.8%] for the lite model. The maximum discrepancy between the worst and best performing categories is 5.7% for the heavy model, 7.0% for the full model and 7.3% for the lite model.
Evaluation across gender yields an average performance of 94.8% with a range of [94.2%, 95.3%] for the heavy model, an average performance of 92.3% with a range of [91.2%, 93.4%] for the full model, and an average of 83.7% with a range of [86.0%, 89.1%] for the lite model.
The maximum discrepancy is 1.1% for the heavy model, 2.2% for the full model and 3.1% for the lite model.
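These discrepancies are simply the spread between the best and worst performing categories, which can be checked against the regional ranges quoted above:

```python
# Regional performance ranges quoted above, as (worst, best) per model.
REGIONAL_RANGES = {
    "heavy": (91.4, 96.2),
    "full": (89.2, 94.0),
    "lite": (83.2, 89.7),
}

def discrepancy(worst: float, best: float) -> float:
    """Spread between the best and worst performing regions, in percentage points."""
    return round(best - worst, 1)

discrepancies = {name: discrepancy(*rng) for name, rng in REGIONAL_RANGES.items()}
# discrepancies == {"heavy": 4.8, "full": 4.8, "lite": 6.5}
```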
