Getting Started with ML5.js — Tutorial Part IV: Yoga Pose Detection

Beginner-friendly tutorial to train and build your own yoga posture detection model in the browser

Veronica Peitong Chen
AIxDESIGN
10 min read · Jan 13, 2022



PSA: It is a key value for us at AIxDESIGN to open-source our work and research. The forced paywalls here have led us to stop using Medium so while you can still read the article below, future writings & resources will be published on other platforms. Learn more at aixdesign.co or come hang with us on any of our other channels. Hope to see you there 👋

Machine Learning (ML) is changing the world, and it has gradually evolved into a new medium and design material for artists, designers, and creatives. However, algorithms and programming can be a significant entry barrier for many who want to understand and explore what ML has to offer.

ML5.js exists precisely for this purpose. With its entry-level interface, ML5.js makes ML more approachable and easier to use, specifically with designers and artists in mind.

Event poster for Erik’s ML5.js workshops

In November and December 2021, we hosted a series of ML5.js workshops with Erik Katerborg — a creative technologist, lecturer teaching Creative Media and Game Technologies at the Rotterdam University of Applied Sciences, and active AIxDesign community member.

To spread the word and invite more people to take their first steps into coding, we created this article as a written version of the workshop for anyone to follow at any moment in time. If video is more your thing, you can also watch a recording of the workshop on our YouTube channel here.

We have published Part I, Part II, & Part III of this workshop series. Make sure to check them out.

In this article (Tutorial Part IV), we will use the trained model to build your own yoga posture detection website. Some of the foundational knowledge was covered in Tutorial Part III.

Let’s build your yoga pose detector!

In this tutorial, we will show you how to build a yoga pose detector in your browser. First, we need to use PoseNet to detect body key points, which will be documented in a CSV file. After generating the dataset, we can use it to train an ML model that can then detect and classify different yoga poses. Last but not least, if you are feeling adventurous, we can feed in the webcam stream and run live yoga pose detection!

  • Step 0. Setup
  • Step 1. Get yoga data using PoseNet
  • Step 2. Train & save the model
  • Step 3. Load the model and classify yoga pose
  • Step 4. Live detection using your webcam

Don’t be intimidated by the number of steps! The goal is to show you how to do everything, but you can take shortcuts by using the files we provided to skip some of the steps ;)

Sounds good? Let’s get started.

Step 0. Setup

In the first part of the tutorial, we will look at how to extract yoga pose key points to create a CSV file, which can be used to train the machine learning model.

As usual, let’s set up the Glitch environment. Remix part3-keypoints from Erik’s Glitch page.

Step 0/4 — Remix your own

In addition, download a set of yoga images from Kaggle.

Step 0/4 — Download yoga poses

In Glitch, click the Files button to upload one of the yoga pose images you just downloaded. The newly detected pose will be automatically drawn over the image using PoseNet. The red nodes represent body key points and the red lines represent the skeleton/connections between the nodes.

Step 0/4 — Upload a yoga pose

In script.js, you can view what is happening behind the curtain and customize the display of the key points/skeletons.

To get the key points from the yoga images, we will need help from PoseNet through ml5.js. In Tutorial Part II, we went through some basics of PoseNet, so if you have any questions, make sure to check that tutorial out.

To help you understand what is going on in the code, let’s look at the functions one by one.

  • function setup() is where you can define the drawing style, as well as call the PoseNet function through ml5.js’s API. Whenever there is a new pose, it is put into a variable called poses, and the pose visualization is updated.
  • function modelReady() allows us to upload our files and run the image through PoseNet.
  • function draw() draws the image as well as the skeleton and key points.
  • The functions drawKeypoints(), drawSkeleton(), and setScale() help with drawing the poses and scaling the images. (A condensed sketch of this flow follows below.)
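To make this flow concrete, here is a condensed sketch of how such a PoseNet setup could look. It follows the standard ml5.js PoseNet API, but the variable names and the img element are illustrative rather than Erik's exact code:

let poseNet
let poses = []

function setup() {
  ctx.strokeStyle = 'red' // drawing style for key points and skeleton
  // Load PoseNet through ml5.js; modelReady fires once it has loaded
  poseNet = ml5.poseNet(modelReady)
  // Every new detection is stored in poses and triggers a redraw
  poseNet.on('pose', (results) => {
    poses = results
    draw()
  })
}

function modelReady() {
  // Run PoseNet once on the uploaded image (img: the <img> element)
  poseNet.singlePose(img)
}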

Step 1. Get yoga data using PoseNet

To start, let’s take a look at what is being detected by PoseNet. Uncomment the following line in function logKeyPoints():

console.log(poses)
Step 1/4 — View detection results

If you open the preview in a new window and open the console, you should be able to see an array containing all the key points for the image pose. PoseNet helps us to document body parts’ names and positions.

Step 1/4 — View pose keypoints in the console

For instance, from leftAnkle, we learn that the x-position of the left ankle is 591.32 and the y-position is 1651.33.

leftAnkle: {x: 591.3214372568093, y: 1651.3324416342414, confidence: 0.5109665393829346}
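Each named body part can also be read directly from the pose object. For example, assuming the same poses array as above:

// Read a single keypoint straight from the detected pose
const leftAnkle = poses[0].pose.leftAnkle
console.log(leftAnkle.x, leftAnkle.y, leftAnkle.confidence)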

This is super helpful because we can document all the key points for any yoga pose image and save the data to a CSV file, which can then be used to train an ML model to detect and classify yoga poses.

Let’s go back to script.js and edit the following lines in function logKeyPoints() to get the x and y positions of all key points and view them in the console.

function logKeyPoints() {
  let points = []
  for (let keypoint of poses[0].pose.keypoints) {
    points.push(Math.round(keypoint.position.x))
    points.push(Math.round(keypoint.position.y))
  }
  console.log(points)
}
Step 1/4 — Get x and y positions

Now we can use the data to create a CSV file. Erik has included one in the Glitch project, named yoga.csv. Notice that the top line lists the names of all key points, and that the last item is yogapose. Correspondingly, the second line contains all the position values as well as the classification of the yoga pose, goddess in this case.
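Truncated and with illustrative values, the structure looks like this (the full header has 34 coordinate columns, two per key point, followed by the label):

leftAnklex,leftAnkley,leftEarx,leftEary,…,rightWristx,rightWristy,yogapose
591,1651,…,goddess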

Let’s not worry about getting a lot of data now, because Erik has prepared everything we need to proceed to Step 2, where we will use the data in the provided CSV to train a yoga pose detection model.

Step 2. Train & save the model

In Tutorial Part III, we covered the process of training your own model. In this step, we will show you what code changes you need to make to train a yoga model.

In the finished file from Tutorial Part III, script.js contains a section of code for training yoga poses at the end of the page, shown below. In function start(), replace everything in const options = {...} with this code.

const options = {
  dataUrl: './yoga.csv',
  inputs: [
    'leftAnklex', 'leftAnkley', 'leftEarx', 'leftEary', 'leftElbowx', 'leftElbowy',
    'leftEyex', 'leftEyey', 'leftHipx', 'leftHipy', 'leftKneex', 'leftKneey',
    'leftShoulderx', 'leftShouldery', 'leftWristx', 'leftWristy', 'nosex', 'nosey',
    'rightAnklex', 'rightAnkley', 'rightEarx', 'rightEary', 'rightElbowx', 'rightElbowy',
    'rightEyex', 'rightEyey', 'rightHipx', 'rightHipy', 'rightKneex', 'rightKneey',
    'rightShoulderx', 'rightShouldery', 'rightWristx', 'rightWristy'
  ],
  outputs: ['yogapose'],
  task: 'classification',
  debug: true
}

By uncommenting myNeuralNetwork.save(), you should be able to download the three files for the trained model.
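For reference, here is a minimal sketch of the surrounding training flow, following the pattern from Tutorial Part III; the callback names are illustrative and may differ from the ones in your file:

// Create the network; dataLoaded fires once yoga.csv has been read
myNeuralNetwork = ml5.neuralNetwork(options, dataLoaded)

function dataLoaded() {
  myNeuralNetwork.normalizeData() // scale all inputs to a 0–1 range
  myNeuralNetwork.train({ epochs: 50 }, finishedTraining)
}

function finishedTraining() {
  console.log('training complete')
  // Downloads model.json, model_meta.json, and model.weights.bin
  myNeuralNetwork.save()
}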

Step 2/4 — Edit the file from Tutorial III

Stressed out about training the model, or ran into any issues? No worries. In Step 3, Erik has also prepared a pre-trained yoga pose detection model for us to use, so feel free to skip this step and jump to Step 3 if needed.

Step 3. Load the model and classify yoga pose

Now that you understand the previous steps, let’s start fresh and use a new file to see how everything comes together. Access and remix the file here.

Head over to script.js and you should see a few familiar code blocks, including loading the PoseNet and drawing the key points and skeleton.

The first thing we need to do is access the yoga model Erik has prepared for us. You can simply copy the following code and replace function setup().

function setup() {
  ctx.strokeStyle = 'red'
  ctx.fillStyle = "white"
  ctx.lineWidth = 3

  neuralNetwork = ml5.neuralNetwork({ task: 'classification' })
  const modelInfo = {
    model: './',
    metadata: './',
    weights: '',
  }
  neuralNetwork.load(modelInfo, yogaModelLoaded)
}
Step 3/4 — Access the yoga model

To ensure access to the yoga model files, we can follow the same model-loading steps from Tutorial Part III. To recap, you want to make sure the correct file paths are filled in inside modelInfo.

Erik has already uploaded the model files for us, and you can simply update the content of modelInfo.

function setup() {
  ctx.strokeStyle = 'red'
  ctx.fillStyle = "white"
  ctx.lineWidth = 3

  neuralNetwork = ml5.neuralNetwork({ task: 'classification' })
  const modelInfo = {
    model: './model/model.json',
    metadata: './model/model_meta.json',
    weights: 'https://cdn.glitch.me/e0290f89-3adb-4ab8-9de7-bdc16e8e827a%2Fmodel.weights.bin?v=1637783808826',
  }
  neuralNetwork.load(modelInfo, yogaModelLoaded)
}

Your code should look like this.

Step 3/4 — Review code
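The load call takes a callback that fires once all three model files are in. If your remix does not define it yet, a minimal version could simply confirm readiness (the name yogaModelLoaded is taken from the call above):

function yogaModelLoaded() {
  // The pre-trained yoga model is ready; we can now classify poses
  console.log('yoga model loaded!')
}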

The last step is to log the key points of the newly loaded image, display the key points array on screen, and send the array through the yoga model to classify the pose.

Simply scroll down to the bottom of the script.js file and replace the content of function logKeyPoints(). As in the previous steps, we create an array called points and acquire the key points using PoseNet whenever the user uploads an image. The code then displays the array as a string on screen for reference and, lastly, sends the points array through the neural network to classify the yoga pose.

function logKeyPoints() {
  let points = []
  for (let keypoint of poses[0].pose.keypoints) {
    points.push(Math.round(keypoint.position.x))
    points.push(Math.round(keypoint.position.y))
  }
  numbers.innerHTML = points.toString()

  neuralNetwork.classify(points, yogaResult)
}

Your code should look like this.

Step 3/4 — Review code

If you refresh the preview panel and upload your own yoga image, you should be able to see the classification of the yoga pose, the confidence score, as well as the body key points of the image.
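Under the hood, those values arrive in the yogaResult callback. Here is a minimal sketch of what it could look like, assuming the standard ml5.js classify callback signature, where results are sorted by confidence:

function yogaResult(error, results) {
  if (error) {
    console.error(error)
    return
  }
  // results[0] holds the most confident guess,
  // e.g. { label: 'goddess', confidence: 0.93 }
  console.log(results[0].label, results[0].confidence)
}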

There you go, here is your very own yoga pose detector!

Step 3/4 — Refresh and test!

Step 4. Live detection using your webcam

You have already finished most of the tutorial, and this step is completely optional; however, with it you will be able to use your webcam feed and have the page detect yoga poses live!

Again, Erik has carefully prepared the file for us. You can remix the code and test the model yourself.
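If you are curious what the core of live detection could look like, here is a minimal sketch. It assumes a <video> element on the page and reuses logKeyPoints() from Step 3; the element id and wiring are illustrative, not necessarily Erik's exact code:

// Grab the webcam stream and run PoseNet on it continuously
const video = document.getElementById('video')

navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
  video.srcObject = stream
  video.play()
  // Passing a video element makes PoseNet fire 'pose' on every frame
  const poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'))
  poseNet.on('pose', (results) => {
    if (results.length > 0) {
      poses = results
      logKeyPoints() // classify the current frame, as in Step 3
    }
  })
})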

That’s everything for this tutorial and we hope you learned a lot!

Ran Into Any Issues? No Worries.

You can always review the workshop video on YouTube: https://www.youtube.com/watch?v=dnk6kT38sBo

In addition, we have provided the finished files, which you can access via Erik's Glitch page (linked under Workshop Materials below).

Conclusion & Next steps

We hope you had fun and gained some new perspectives and ideas to bring back to your own practice. And of course, a big shout out to our brilliant host Erik Katerborg who showed us the potential and fun creative applications of doing Machine Learning in the browser using ML5.

We’d love to hear your key takeaways in a reply or a post if you’re willing to share! Please leave a like and share with your friends if you found the article helpful!

Links to Workshop Materials

To rewatch the workshop, please head over to our YouTube channel where we have uploaded the recording: https://www.youtube.com/watch?v=dnk6kT38sBo

This is Erik’s Glitch account where you can find all his projects: https://glitch.com/@KokoDoko.

Again, you can always access the finished project through Erik's Glitch account above.

Helpful Resources

Beginner's guide to ML5 by The Coding Train — https://thecodingtrain.com/learning/ml5/

Made with Tensorflow.js Youtube channel — https://www.youtube.com/watch?v=h9i7d4R36Lw&list=PLQY2H8rRoyvzSZZuF0qJpoJxZR1NgzcZw

ML5.js documentation — https://learn.ml5js.org/

Interested in More Workshops?

Stay tuned to our Eventbrite page for upcoming workshops, keynotes, and networking events!

Thank you, Erik!

This tutorial is fully based on the workshop developed and hosted by Erik for AIxDesign in November & December 2021. So big shout out to Erik for this amazing work! Erik Katerborg is a creative technologist and lecturer in creative media at the Rotterdam University of Applied Sciences. He is especially interested in making technology and coding more accessible to people with a creative or design background.

To stay in touch, you can connect with Erik on LinkedIn, Twitter, or Instagram.

About AIxDesign

AIxDesign is a place to unite practitioners and evolve practices at the intersection of AI/ML and design. We are currently organizing monthly virtual events (like this one), sharing content, exploring collaborative projects, and developing fruitful partnerships.

To stay in the loop, follow us on Instagram, LinkedIn, or subscribe to our monthly newsletter to capture it all in your inbox. You can now also find us at aixdesign.co.
