Pose estimation is the task of using an ML model to estimate the pose of a person from an image or a video by estimating the spatial locations of key body joints (keypoints).
If you are new to TensorFlow Lite and are working with Android or iOS, explore the following example applications that can help you get started.
If you are familiar with the TensorFlow Lite APIs, download the starter MoveNet pose estimation model and supporting files.
Download starter model
If you want to try pose estimation on a web browser, check out the TensorFlow JS Demo.
Pose estimation refers to computer vision techniques that detect human figures in images and videos, so that one could determine, for example, where someone’s elbow shows up in an image. Note that pose estimation only estimates where key body joints are; it does not recognize who is in an image or video.
The pose estimation models take a processed camera image as the input and output information about keypoints. The keypoints detected are indexed by a part ID, with a confidence score between 0.0 and 1.0. The confidence score indicates the probability that a keypoint exists at that position.
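To make the output format concrete, here is a minimal sketch of reading keypoints from a model's raw output. It assumes the single-pose output layout used by MoveNet, a `[1, 1, 17, 3]` float array holding one `(y, x, score)` triple per keypoint with coordinates normalized to `[0.0, 1.0]`; the random array, the `extract_keypoints` helper, and the threshold value are illustrative assumptions, not part of the API.

```python
import numpy as np

# Illustrative stand-in for a model's raw output: MoveNet single-pose models
# produce a [1, 1, 17, 3] float array, one (y, x, score) triple per keypoint,
# with y and x normalized to [0.0, 1.0].
raw_output = np.random.default_rng(0).random((1, 1, 17, 3)).astype(np.float32)

def extract_keypoints(output, score_threshold=0.3):
    """Return (part_id, y, x, score) tuples for confidently detected keypoints."""
    keypoints = output[0, 0]  # shape (17, 3)
    results = []
    for part_id, (y, x, score) in enumerate(keypoints):
        # Keep only keypoints whose confidence clears the threshold.
        if score >= score_threshold:
            results.append((part_id, float(y), float(x), float(score)))
    return results

detected = extract_keypoints(raw_output)
```

In practice you would multiply `y` and `x` by the image height and width to recover pixel coordinates, and tune the threshold to trade off missed joints against false detections.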
We provide reference implementations of two TensorFlow Lite pose estimation models:
The various body joints detected by the pose estimation model are tabulated below:
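For programmatic use, the tabulated part IDs can be expressed as a lookup table. This sketch assumes the standard 17-keypoint COCO topology shared by MoveNet and PoseNet; the constant name `KEYPOINT_NAMES` is an illustrative choice.

```python
# Part-ID to body-joint mapping (COCO topology, 17 keypoints),
# as used by both MoveNet and PoseNet.
KEYPOINT_NAMES = {
    0: "nose",
    1: "left_eye",
    2: "right_eye",
    3: "left_ear",
    4: "right_ear",
    5: "left_shoulder",
    6: "right_shoulder",
    7: "left_elbow",
    8: "right_elbow",
    9: "left_wrist",
    10: "right_wrist",
    11: "left_hip",
    12: "right_hip",
    13: "left_knee",
    14: "right_knee",
    15: "left_ankle",
    16: "right_ankle",
}
```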
An example output is shown below:
MoveNet is available in two flavors: Lightning and Thunder. Lightning is intended for latency-critical applications, while Thunder is intended for applications that require high accuracy.
MoveNet outperforms PoseNet on a variety of datasets, especially on images with fitness actions. Therefore, we recommend using MoveNet over PoseNet.
Performance benchmark numbers are generated with the tool described here. Accuracy (mAP) numbers are measured on a subset of the COCO dataset in which we filter and crop each image to contain only one person.
Also, check out these use cases of pose estimation.