Real-time pose estimation and human body detection without packages, using the PoseTracker API (20+ fps on iOS, Android & Web)

--

[Video: real-time body pose detection demo]

After spending a lot of time looking for the best options to build real-time human body detection on mobile, and testing many packages and AI models on iOS and Android, I decided to create a way to help other devs add this to their applications without a package and all the problems that come with one!

Easier and faster with the PoseTracker API

PoseTracker is the first real-time posture detection API optimized for Web, iOS, and Android. There is no need to install a package: everything is provided by our API, callable from a WebView 🚀.

This is how it works:

  1. You define your needs, such as exercises, reps, and time between reps, then send them to our API.
  2. PoseTracker responds with a URL that will guide and analyze the user’s training (see the illustrative sketch just after this list).
  3. You can call the URL within a WebView in your mobile or web application.
  4. The WebView provides analysis, posture detection, and exercise recognition.
  5. We provide live feedback to your application, so that you can create your own design to display information to your users.
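
For instance, steps 1 and 2 could look like the sketch below. The endpoint path and request fields here are purely hypothetical placeholders for illustration; check the API documentation for the real shape.

// Hypothetical illustration of steps 1-2 (not the real API shape)
const res = await fetch('https://example-posetracker.test/config', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ exercise: 'squat', reps: 10, restBetweenReps: 30 }),
})
const { url } = await res.json() // the URL to load in a WebView (step 3)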

Creating an Expo React Native application with the PoseTracker free endpoint

For now, we will use the free PoseTracker endpoint to create a WebView that detects 17 points of the human body and returns them so we can display a skeleton over the user’s camera feed. (So we are starting at step 3.)
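
Those 17 points correspond to the standard COCO-style keypoint set used by models such as MoveNet, an assumption that matches the keypoint names returned in the payloads later in this tutorial:

// The 17 COCO-style keypoint names (assumed; they match the payloads below)
const KEYPOINT_NAMES = [
  'nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear',
  'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow',
  'left_wrist', 'right_wrist', 'left_hip', 'right_hip',
  'left_knee', 'right_knee', 'left_ankle', 'right_ankle',
]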

1. Setting up the config and required imports

Create an Expo React Native application and install these packages (e.g. with npx expo install): react-native-webview, expo-camera, react-native-svg

Define our App.js base and allow our app to access the user’s camera:

import { StyleSheet, Text, View } from 'react-native'
import { WebView } from 'react-native-webview'
import { useEffect, useState } from 'react'
import { Camera } from 'expo-camera'

// Size of the camera output, passed to the PoseTracker endpoint
const width = 300
const height = 300

export default function App() {
  const [hasPermission, setHasPermission] = useState(false)

  // Our API needs access to the device camera
  useEffect(() => {
    (async () => {
      const { status } = await Camera.requestCameraPermissionsAsync()
      setHasPermission(status === 'granted')
    })()
  }, [])

  if (!hasPermission) {
    return (
      <View style={styles.container}>
        <Text>The app needs access to your camera. Allow it in your device settings.</Text>
      </View>
    )
  }

  // jsBridge, webViewCallback, renderPose and currentPoses are defined in the next steps
  return (
    <View style={styles.container}>
      <WebView
        javaScriptEnabled={true}
        domStorageEnabled={true}
        allowsInlineMediaPlayback={true}
        mediaPlaybackRequiresUserAction={false}
        style={{
          width: width,
          height: height,
          zIndex: 1,
        }}
        source={{
          uri: `https://posture-detector-api.vercel.app/posture-detect?width=${width}&height=${height}`,
        }}
        originWhitelist={['*']}
        injectedJavaScript={jsBridge}
        onMessage={(event) => {
          const info = JSON.parse(event.nativeEvent.data)
          webViewCallback(info)
        }}
      />
      {renderPose(currentPoses)}
    </View>
  )
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    alignItems: 'center',
    justifyContent: 'center',
  },
})

2. Data exchange between the PoseTracker WebView and our app

We need to build a WebView and set the following source: https://posture-detector-api.vercel.app/posture-detect?width=${width}&height=${height}

The endpoint takes two params, width and height, for the dimensions of the camera output.
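
The snippet below hard-codes a 300×300 output. If you would rather fill the screen, a small optional variant using React Native’s Dimensions API works too:

import { Dimensions } from 'react-native'

// Size the camera output to the full window instead of a fixed 300x300
const { width, height } = Dimensions.get('window')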

import { WebView } from 'react-native-webview'

const width = 300
const height = 300

<WebView
  javaScriptEnabled={true}
  domStorageEnabled={true}
  allowsInlineMediaPlayback={true}
  mediaPlaybackRequiresUserAction={false}
  style={{
    width: width,
    height: height,
    zIndex: 1,
  }}
  source={{
    uri: `https://posture-detector-api.vercel.app/posture-detect?width=${width}&height=${height}`,
  }}
  originWhitelist={['*']}
  injectedJavaScript={jsBridge}
  onMessage={(event) => {
    const info = JSON.parse(event.nativeEvent.data)
    webViewCallback(info)
  }}
/>

We handle WebView responses in the onMessage prop.

The endpoint uses what we send through injectedJavaScript to send data back to our app:

const jsBridge = `
  (function() {
    window.webViewCallback = function(info) {
      window.ReactNativeWebView.postMessage(JSON.stringify(info));
    }
  })();
`

This bridge allows us to receive data from the WebView in our app.
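
Conceptually, the page served by the endpoint calls that function every time it has a new result, along these lines (an illustration of the mechanism, not the endpoint’s actual source):

// Inside the PoseTracker page (illustrative only)
window.webViewCallback({
  type: 'body poses',
  poses: { keypoints: [ /* 17 keypoints */ ], score: 0.28 },
})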

3. Handling the received data

onMessage={(event) => {
  const info = JSON.parse(event.nativeEvent.data)
  webViewCallback(info)
}}

With this WebView prop, we parse the received data and pass it to webViewCallback:

// This function receives data from the WebView
const webViewCallback = (info) => {
  switch (info.type) {
    case 'body poses':
      return handlePoses(info.poses)
    default:
      return handlePoses(info)
  }
}

Check out the API documentation to learn more about what’s inside the information received from WebView.

For this tutorial we will only handle one type of message, info.type = 'body poses'. It looks like:

Object {
  "type": "body poses",
  "poses": Object {
    "keypoints": Array [
      Object {
        "name": "nose",
        "score": 0.17302913963794708,
        "x": 51.53513909486614,
        "y": 292.0035267462864,
      },
      .....
      Object {
        "name": "right_ankle",
        "score": 0.0781388059258461,
        "x": 42.53210480412571,
        "y": 278.1044364463628,
      },
    ],
    "score": 0.28030067682266235,
  },
}
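
Because every keypoint carries x/y coordinates, this payload is already enough to derive useful metrics such as joint angles. A minimal sketch, assuming the keypoint names shown above:

// Angle in degrees at joint B, formed by segments B-A and B-C
// (e.g. the left elbow angle from left_shoulder, left_elbow, left_wrist)
const jointAngle = (A, B, C) => {
  const ab = Math.atan2(A.y - B.y, A.x - B.x)
  const cb = Math.atan2(C.y - B.y, C.x - B.x)
  const deg = Math.abs((ab - cb) * (180 / Math.PI))
  return deg > 180 ? 360 - deg : deg
}

// Usage with the poses object received above
const byName = new Map(poses.keypoints.map((k) => [k.name, k]))
const elbowAngle = jointAngle(
  byName.get('left_shoulder'),
  byName.get('left_elbow'),
  byName.get('left_wrist')
)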

4. Drawing the body skeleton over the camera output

First, we need to store these poses:

const [currentPoses, setCurrentPoses] = useState()

const handlePoses = (poses) => {
  // console.log('current body poses:', poses)
  setCurrentPoses(poses)
}

Then, inside the main View, we pass this state to {renderPose(currentPoses)}:

// Check that poses has keypoints to draw
export const renderPose = (poses) => {
  if (poses && poses.keypoints?.length > 0) {
    return drawSkeleton(poses)
  } else {
    return <View />
  }
}

Now we can draw the skeleton using react-native-svg:

import Svg, { Circle, Line } from 'react-native-svg'

// Minimum model confidence for a keypoint to be drawn
const MIN_KEYPOINT_SCORE = 0.3

export const drawSkeleton = (poses) => {
  const circles = drawCircles(poses)
  const lines = drawLines(poses)

  return (
    <Svg
      style={{
        position: 'absolute',
        left: -50,
        zIndex: 30,
        // Mirror horizontally so the skeleton matches the camera preview
        transform: [{ scaleX: -1 }],
      }}>
      {circles}
      {lines}
    </Svg>
  )
}

export const drawCircles = (poses) => {
  const circles = poses.keypoints
    .filter((k) => (k.score ?? 0) > MIN_KEYPOINT_SCORE)
    .map((k) => (
      <Circle
        key={`skeletonkp_${k.name}`}
        cx={k.x}
        cy={k.y}
        r='8'
        strokeWidth='4'
        fill='#cff532'
        stroke='#FFC300'
      />
    ))

  return circles
}

export const drawLines = (poses, showFacePoints = true) => {
  let lines = []
  // Index the keypoints by name
  const points = new Map()
  poses.keypoints.forEach((point) => points.set(point.name, point))

  // shoulders and hips
  lines.push(drawLine(points.get('left_shoulder'), points.get('right_shoulder')))
  lines.push(drawLine(points.get('left_hip'), points.get('right_hip')))

  // left arm
  lines.push(drawLine(points.get('left_shoulder'), points.get('left_elbow')))
  lines.push(drawLine(points.get('left_elbow'), points.get('left_wrist')))

  // left side
  lines.push(drawLine(points.get('left_shoulder'), points.get('left_hip')))

  // left leg
  lines.push(drawLine(points.get('left_hip'), points.get('left_knee')))
  lines.push(drawLine(points.get('left_knee'), points.get('left_ankle')))

  // right arm
  lines.push(drawLine(points.get('right_shoulder'), points.get('right_elbow')))
  lines.push(drawLine(points.get('right_elbow'), points.get('right_wrist')))

  // right side
  lines.push(drawLine(points.get('right_shoulder'), points.get('right_hip')))

  // right leg
  lines.push(drawLine(points.get('right_hip'), points.get('right_knee')))
  lines.push(drawLine(points.get('right_knee'), points.get('right_ankle')))

  // face
  if (showFacePoints) {
    lines.push(drawLine(points.get('right_ear'), points.get('right_eye')))
    lines.push(drawLine(points.get('right_eye'), points.get('nose')))
    lines.push(drawLine(points.get('nose'), points.get('left_eye')))
    lines.push(drawLine(points.get('left_eye'), points.get('left_ear')))
  }
  return lines
}

function drawLine(pointA, pointB) {
  // Skip missing or low-confidence keypoints
  if (!pointA || !pointB) return
  if (pointA.score < MIN_KEYPOINT_SCORE || pointB.score < MIN_KEYPOINT_SCORE)
    return

  return (
    <Line
      key={`skeletonkp_line_${pointA.name}_${pointB.name}`}
      x1={pointA.x}
      y1={pointA.y}
      x2={pointB.x}
      y2={pointB.y}
      stroke='#cff532'
      strokeWidth={4}
    />
  )
}
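
As a design note, the connection list in drawLines can also be expressed as data, which keeps all the pairs in one place. A sketch of an equivalent variant (not code from the original app):

// Skeleton connections as data (equivalent to the pushes in drawLines)
const SKELETON_CONNECTIONS = [
  ['left_shoulder', 'right_shoulder'], ['left_hip', 'right_hip'],
  ['left_shoulder', 'left_elbow'], ['left_elbow', 'left_wrist'],
  ['left_shoulder', 'left_hip'],
  ['left_hip', 'left_knee'], ['left_knee', 'left_ankle'],
  ['right_shoulder', 'right_elbow'], ['right_elbow', 'right_wrist'],
  ['right_shoulder', 'right_hip'],
  ['right_hip', 'right_knee'], ['right_knee', 'right_ankle'],
]

export const drawLinesCompact = (poses) => {
  const points = new Map()
  poses.keypoints.forEach((point) => points.set(point.name, point))
  return SKELETON_CONNECTIONS.map(([a, b]) => drawLine(points.get(a), points.get(b)))
}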

We did it!

The next step: sign up for our API and start detecting movements 💪

Source code on GitHub for the React Native app:

Any need for a posture detection application? Visit www.movelytics.fr

--

Fabrice Sepret (CEO at Movelytics)

Using artificial intelligence to fight against a sedentary lifestyle! Try our technology with WorkoutBattle (available on the iOS & Android app stores).